+
+The recommended token threshold for a tool response sent to the LLM.
+If the response exceeds this threshold, a notification will be raised.
+
+- **Type**: `int`
+- **Default**: 10000
+- **Scope**: Server-wide or per database
+
+
+
+## Ai.Agent.Trimming.Summarization.SummarizationResultPrefix
+
+The text prefix that precedes the summary of the previous conversation.
+
+- **Type**: `string`
+- **Default**: "Summary of previous conversation: "
+- **Scope**: Server-wide or per database
+
+
+
+## Ai.Agent.Trimming.Summarization.SummarizationTaskBeginningPrompt
+
+The instruction text that precedes the serialized conversation when requesting a summary.
+
+- **Type**: `string`
+- **Default**: @"Summarize the following AI conversation into a concise, linear narrative that
+ retains all critical information. Ensure the summary:
+ - Includes key identifiers, usernames, timestamps, and any reference codes
+ - Preserves the original intent of both the user and the assistant in each exchange
+ - Reflects decisions made, suggestions given, preferences expressed, and any changes in direction
+ - Captures tone when relevant (e.g., sarcastic, formal, humorous, concerned)
+  - Omits general filler or small talk unless it contributes to context or tone
+
+  Format the output in a structured manner (such as bullet points or labeled sections) suitable for fitting into a limited context window. Do not discard any information that contributes to understanding the conversation's flow and outcome."
+- **Scope**: Server-wide or per database
+
+
+
+## Ai.Agent.Trimming.Summarization.SummarizationTaskEndPrompt
+
+The user-role message that triggers the conversation summarization process.
+
+- **Type**: `string`
+- **Default**: "Reminder - go over the entire previous conversation and summarize that according to the original instructions"
+- **Scope**: Server-wide or per database
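+
+These trimming options are standard RavenDB configuration keys. As a minimal sketch (assuming the usual `settings.json` server-configuration mechanism; per-database overrides go in the database settings instead), a server-wide override could look like -
+
+```json
+{
+    "Ai.Agent.Trimming.Summarization.SummarizationResultPrefix": "Conversation summary: "
+}
+```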
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_overview.mdx b/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_overview.mdx
new file mode 100644
index 0000000000..fa189617c1
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_overview.mdx
@@ -0,0 +1,272 @@
+---
+title: "AI agents: Overview"
+hide_table_of_contents: true
+sidebar_label: Overview
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# AI agents: Overview
+
+
+* An AI agent is a highly customizable [mediation component](../../ai-integration/ai-agents/ai-agents_overview#ai-agent-usage-flowchart) that an authorized client can tailor to its needs and install on the server. The agent serves the client by facilitating communication between the client, an LLM, and a RavenDB database.
+
+* Clients can use AI agents to automate complex workflows by leveraging LLM capabilities such as data analysis, decision-making, and natural language processing.
+
+* The LLM can use an AI agent to query the database and request the client to perform actions.
+
+* Granting an LLM access to a credible data source such as a company database can significantly enhance its ability to provide the client with accurate and context-aware responses. Such access can also mitigate LLM behaviors that harm usability, like 'hallucinations' and user-pleasing bias.
+
+* Delegating the communication with the LLM to an AI agent can significantly reduce client code complexity and development overhead.
+
+* In this article:
+ * [Defining and running AI agents](../../ai-integration/ai-agents/ai-agents_overview#defining-and-running-an-ai-agent)
+ * [The main stages in defining an AI agent](../../ai-integration/ai-agents/ai-agents_overview#the-main-stages-in-defining-an-ai-agent)
+ * [What is a conversation](../../ai-integration/ai-agents/ai-agents_overview#what-is-a-conversation)
+ * [Initiating a conversation](../../ai-integration/ai-agents/ai-agents_overview#initiating-a-conversation)
+ * [AI agent usage flowchart](../../ai-integration/ai-agents/ai-agents_overview#ai-agent-usage-flowchart)
+ * [Streaming LLM responses](../../ai-integration/ai-agents/ai-agents_overview#streaming-llm-responses)
+ * [Reducing throughput and expediting LLM response](../../ai-integration/ai-agents/ai-agents_overview#reducing-throughput-and-expediting-llm-response)
+ * [Common use cases](../../ai-integration/ai-agents/ai-agents_overview#common-use-cases)
+
+
+
+## Defining and running an AI agent
+
+AI agents can be created by RavenDB clients (provided they have database administration permissions).
+They reside on a RavenDB server, and can be invoked by clients to, for example, handle user requests and respond to events tracked by the client.
+
+
+An agent can serve multiple clients concurrently.
+* The agent's **layout**, including its configuration, logic, and tools, is shared by all the clients that use the agent.
+* **Conversations** that clients conduct with the agent are isolated from one another.
+  Each client maintains its own conversation instance with the agent, with complete privacy, including -
+ * Parameter values that the client may pass to the agent
+ * All conversation content and history
+ * Results received when the conversation ends
+
+
+
+* [Learn to create an AI agent using the client API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api)
+* [Learn to create an AI agent using Studio](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio)
+
+
+### The main stages in defining an AI agent:
+To define an AI agent, the client needs to specify -
+
+* A **connection string** to the AI model.
+ [Create a connection string using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#creating-a-connection-string)
+ [Create a connection string using Studio](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#configure-basic-settings)
+
+* An **agent configuration** that defines the agent.
+ [Define an agent configuration using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#defining-an-agent-configuration)
+ [Define an agent configuration using Studio](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#configure-basic-settings)
+
+ An agent configuration includes -
+  * **Basic agent settings**, like the unique ID by which the system recognizes the agent.
+  * A **system prompt** by which the agent tells the AI model what its characteristics are, e.g. its role.
+ * Optional **agent parameters**.
+    Agent parameter values are provided by the client when it starts a conversation with the agent, and can be used in queries initiated by the LLM (see **query tools** below).
+ * Optional **query tools**.
+ The LLM will be able to invoke query tools freely to retrieve data from the database.
+ * **Read-only operations**
+ Query tools can apply **read operations** only.
+ To make changes in the database, use [action tools](../../ai-integration/ai-agents/ai-agents_overview#action-tools).
+
+      Note that actions can be performed only by the client; the LLM can only request that the client perform actions on its behalf.
+
+ * **Database access**
+ The LLM has no direct access to the database. To use a query tool, it must send a query request to the agent, which will send the RQL query defined by the tool to the database and pass its results to the LLM.
+ * **Query parameters**
+ The RQL query defined by a query tool may optionally include parameters, identified by a `$` prefix.
+ Both the user and the LLM can pass values to these parameters.
+ **Users** can pass values to query parameters through **agent parameters**,
+ when the client starts a conversation with the agent.
+      **The LLM** can pass values to queries through a **parameters schema**,
+      outlined as part of the query tool, when requesting the agent to run the query
+      (a sketch of such a schema appears after this list).
+ * **Initial-context queries**
+ You can optionally set a query tool as an **initial-context query**.
+      Queries that are **not** set this way run only when the LLM requests the agent to run them.
+      Queries that **are** set as initial-context queries are executed by the agent as soon as it starts a conversation with the LLM, without waiting for the LLM to invoke them, so that data relevant to the conversation is included in the initial context sent to the LLM.
+      E.g., an initial-context query can provide the LLM, before the actual conversation starts, with the last 5 orders placed by a customer, as context for an answer the LLM is asked to give about the customer's order history (see the sketch after this list).
+
+ * Optional **action tools** that the LLM will be able to invoke freely.
+ The LLM will be able to use these tools to request the client to perform actions.
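+
+As an illustration of the pieces above, the "last 5 orders" example could be served by a hypothetical query tool whose RQL query uses an agent parameter. The collection and field names below are assumptions for illustration, not a prescribed layout -
+
+```rql
+// $companyId is an agent parameter - the client supplies its value when the conversation starts
+from Orders as O
+where O.Company == $companyId
+order by O.OrderedAt desc
+limit 5
+```
+
+If a tool instead lets the LLM choose a value (e.g. a shipping country), the tool describes that parameter in its parameters schema. A minimal sketch, assuming a JSON-Schema-style layout -
+
+```json
+{
+    "type": "object",
+    "properties": {
+        "country": { "type": "string", "description": "The country the orders were shipped to" }
+    },
+    "required": ["country"]
+}
+```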
+
+### What is a conversation:
+A conversation is a communication session between the client, the agent, and the LLM that maintains the history of messages exchanged between these participants since the conversation began.
+* The conversation starts when the client invokes the agent and provides it with an [initial context](../../ai-integration/ai-agents/ai-agents_overview#initiating-a-conversation).
+* The conversation may include multiple "turns" of message exchanges between the client and the LLM, mediated by the agent.
+ * Each turn starts with a new **user prompt** from the client.
+ * During the turn, the LLM can trigger the agent to run queries or request the client to perform actions, using [defined query and action tools](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#adding-agent-tools).
+ * The turn ends with an [LLM response](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#conversation-response) to the user prompt.
+ The response may trigger a new turn (e.g., by requesting more information),
+ or be the final LLM response and end the conversation.
+* The agent maintains the continuity of the conversation by [storing all messages](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#setting-a-conversation) exchanged since the conversation began in a dedicated document in the `@conversations` collection, and by providing all stored messages to the LLM with each new agent message.
+* The conversation ends when the LLM provides the agent with its final response.
+
+[Initiate a conversation using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#managing-conversations)
+[Initiate a conversation using Studio](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#start-new-chat)
+
+### Initiating a conversation:
+To start a conversation with the LLM, the agent will send it an **initial context** that includes -
+
+* The pre-defined [agent configuration](../../ai-integration/ai-agents/ai-agents_overview#the-main-stages-in-defining-an-ai-agent) (automatically sent by the agent) with:
+  * The system prompt
+  * A response object that defines the layout for the LLM response (see the sketch after this list)
+  * Optional agent parameters
+  * Optional query tools
+    (and, if any query tool is configured as an [initial-context query](../../ai-integration/ai-agents/ai-agents_overview#initial-context-queries), the results of that query)
+  * Optional action tools
+
+* **Values for agent parameters**
+ If agent parameters were defined in the agent configuration, the client is required to provide their values to the agent when starting a conversation.
+
+ E.g.,
+ The agent configuration may include an agent parameter called `employeeId`.
+ A query tool may include an RQL query like `from Employees as E where id() == $employeeId`, using this agent parameter.
+ When the client starts a conversation with the agent, it will be required to provide the value for `employeeId`, e.g. `employees/8-A`.
+ When the LLM requests the agent to invoke this query tool, the agent will replace `$employeeId` with `employees/8-A` before running the query.
+  [See an example that utilizes this agent parameter](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#query-tools); a sketch of the substitution appears after this list.
+
+
+  Providing query values when starting a conversation gives the client the ability to shape and limit the scope of LLM queries according to its objectives.
+
+
+* **Stored conversation messages**
+  Since the LLM keeps no record of previous messages, the agent is responsible for maintaining a continuous conversation.
+  It achieves this by automatically recording all messages of each conversation in a dedicated document in the `@conversations` collection.
+  When the agent needs to continue a conversation, it pulls all previous messages from this document and sends them to the LLM.
+  The conversation remains available in the `@conversations` collection even after it ends, so it can be resumed at any time in the future.
+
+* A **user prompt**, set by the client, that defines, for example, a question or a request for particular information.
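+
+For illustration, here is how two of the items above might look in practice. First, the response object that defines the layout of the final LLM response - the shape below is hypothetical; define whatever properties suit your agent's role -
+
+```json
+{
+    "Answer": "The agent's answer to the user prompt",
+    "SuggestedFollowUp": "An optional follow-up question to present to the user"
+}
+```
+
+Second, the agent-parameter substitution described above, shown as the query the tool stores versus the query the agent actually runs -
+
+```rql
+// Query as defined in the tool, using the agent parameter $employeeId
+from Employees as E where id() == $employeeId
+
+// Query executed after the agent substitutes the value provided by the client
+from Employees as E where id() == "employees/8-A"
+```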
+
+
+
+## AI agent usage flowchart
+
+The flowchart below illustrates interactions between the User, RavenDB client, AI agent, AI model, and RavenDB database.
+
+![AI agent usage flowchart](./assets/ai-agents_flowchart.png)
+
+1. **User`<->`Client** flow
+   Users interact with the AI agent through clients.
+   A user can provide agent parameter values through the client, and receive responses from the agent.
+
+2. **Client`<->`Database** flow
+   The client can interact with the database directly, either on its own initiative or as a result of AI agent action requests (query requests are handled by the agent).
+
+3. **Client`<->`Agent** flow
+ * To invoke an agent, the client needs to provide it with an [initial context](../../ai-integration/ai-agents/ai-agents_overview#initiating-a-conversation).
+   * During the conversation, the agent may send action requests to the client on behalf of the LLM.
+ * When the LLM provides the agent with its final response, the agent will provide it to the client.
+ The client does not need to reply to this message.
+ * E.g., the client can pass the agent a research topic, a user prompt that guides the AI model to act as a research assistant, and all the messages that were included in the conversation so far.
+ The agent can respond with a summary of the research topic, and a request for the client to save it in the database.
+
+4. **Agent`<->`Database** flow
+ * The agent can query the database on behalf of the AI model.
+ When the query ends, the agent will return its results to the AI model.
+ * When the agent is requested to run a query that includes _agent parameters_, it will replace these parameters with values provided by the client before running the query.
+ * When the agent is requested to run a query that includes _LLM parameters_, it will replace these parameters with values provided by the LLM before running the query.
+
+5. **Agent`<->`Model** flow
+ * **When a conversation is started**, the agent needs to provide the AI model with an [initial context](../../ai-integration/ai-agents/ai-agents_overview#initiating-a-conversation), partly defined by the agent configuration and partly by the client.
+ * **During the conversation**, the AI model can respond to the agent with -
+ * Requests for queries.
+ If a query includes LLM parameters, the LLM will include values for them, and the agent will replace the parameters with these values, run the query, and return its results to the LLM.
+ If a query includes agent parameters, the agent will replace them with values provided by the client, run the query, and return its results to the LLM.
+ * Requests for actions.
+ The agent will pass such requests to the client and return their results to the LLM.
+ * The final response to the user prompt, in the layout defined by the response object.
+ The agent will pass the response to the client (which doesn't need to reply to it).
+
+
+
+## Streaming LLM responses
+
+Rather than waiting for the LLM to finish generating a response and then passing it to the client in its entirety, the agent can stream response chunks (determined by the LLM, e.g. words or symbols) to the client one by one, as soon as each chunk is returned by the LLM. This allows the client to process and display the response gradually.
+
+Streaming can ease the processing of lengthy LLM responses for clients, and create a better user experience by sparing users the wait and providing a continuous, fluent interaction.
+
+Streaming is supported by most AI services, including OpenAI models like GPT-4 and models served by Ollama.
+
+[Streaming LLM responses using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#stream-llm-responses)
+
+
+
+## Reducing throughput and expediting LLM response
+
+If throughput and LLM response time are considerations, the following suggestions can help optimize performance:
+
+### Define a chat trimming configuration:
+
+The LLM doesn't keep conversation history. To allow a continuous conversation, the agent precedes each new message it sends to the LLM with all the messages that were exchanged in the conversation since it started.
+
+To save traffic and tokens, you can summarize conversations using **chat trimming**. This can be helpful when transfer rate and cost are a concern or the context becomes too large to handle efficiently.
+
+[Configuring chat trimming using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#set-chat-trimming-configuration)
+[Configuring chat trimming using Studio](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#configure-chat-trimming)
+
+### Optimize query tools:
+
+When creating query tools -
+* Provide the LLM with clear instructions on how to use each query tool effectively.
+* Narrow your queries:
+ * Design queries to return only the data that is relevant to the agent's role and the user's prompt.
+ * You can limit the scope of a query both in the RQL statement itself and by using agent parameters to filter results.
+ * Avoid overly broad queries that return large datasets, as they can overwhelm the LLM and lead to slower response times.
+ * Consider projecting only relevant properties and setting a limit on the number of results returned by each query to prevent excessive data transfer and processing, e.g. -
+
+
+    For example, instead of a broad query that returns whole documents with no limit -
+
+    ```rql
+    from Orders as O where O.ShipTo.Country == $country
+    ```
+
+    project only the relevant properties and cap the number of results -
+
+    ```rql
+    from Orders as O where O.ShipTo.Country == $country select O.Employee, O.Lines.Quantity limit 4
+    ```
+
+
+
+* Supervise querying:
+ * Test query tools with various prompts and scenarios to identify and address any performance bottlenecks.
+ * Monitor the performance of query tools in production to identify and address any issues that arise over time.
+ * Regularly review and update query tools to ensure they remain relevant and efficient as the database evolves.
+
+[Creating query tools using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#query-tools)
+[Creating query tools using Studio](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#add-query-tools)
+
+### Set maximum number of querying iterations:
+
+You can limit the number of times that the LLM is allowed to trigger database queries in response to a single user prompt.
+
+[Setting iterations limit using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#set-maximum-number-of-iterations)
+
+
+
+## Common use cases
+
+AI agents are designed to easily integrate AI capabilities into applications and workflows. They can interact with users, intelligently retrieve and process data from proprietary databases, and apply actions based on the roles they are assigned and the data they have access to. Tasks and applications they can be tailored to perform include -
+
+#### Customer support chatbot agents
+Agents can answer customer queries based on information stored in databases and internal knowledge bases, provide troubleshooting steps, and guide users through processes in real time.
+
+#### Data analysis and reporting agents
+Agents can analyze large datasets to extract relevant data and present it in a user-friendly format, escalate customer issues and notable application output, create reports that highlight points of interest, and help businesses make informed decisions.
+
+#### Content generation agents
+Agents can generate summaries, add automated comments to articles and application-generated content, reference readers to related material, and create marketing content based on user input and stored information.
+
+#### Workflow automation agents
+Agents can automate repetitive tasks like email sorting, spam filtering, form filling, or file organization.
+
+#### Intelligent recommendation agents
+Agents can provide personalized recommendations based on user preferences and available data, e.g. a _library assistant_ suggesting books and other resources, an _HR office assistant_ recommending rewards for employees based on their performance and available facilities near their residence, or an _e-commerce assistant_ recommending products.
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_security-concerns.mdx b/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_security-concerns.mdx
new file mode 100644
index 0000000000..295909a052
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_security-concerns.mdx
@@ -0,0 +1,95 @@
+---
+title: "AI agents: Security concerns"
+hide_table_of_contents: true
+sidebar_label: "Security concerns"
+sidebar_position: 4
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# AI agents: Security concerns
+
+
+This page covers potential security concerns related to AI agents and the strategies available to mitigate them.
+
+* On this page:
+ * [Unauthorized database access](../../ai-integration/ai-agents/ai-agents_security-concerns#unauthorized-database-access)
+ * [Data compromise during transit](../../ai-integration/ai-agents/ai-agents_security-concerns#data-compromise-during-transit)
+ * [Untraceable malicious or unexpected actions](../../ai-integration/ai-agents/ai-agents_security-concerns#untraceable-malicious-or-unexpected-actions)
+ * [AI model data memorization](../../ai-integration/ai-agents/ai-agents_security-concerns#ai-model-data-memorization)
+ * [Validation or injection attacks via user input](../../ai-integration/ai-agents/ai-agents_security-concerns#validation-or-injection-attacks-via-user-input)
+
+
+
+## Unauthorized database access
+
+Concern: Unauthorized access to databases can lead to data breaches.
+
+* **Mitigation: Read-only access**
+ The LLM has no direct access to the database. It can only request the agent, via query tools, to query the database on its behalf, and the agent can only apply read-only operations.
+
+* **Mitigation: DBA control**
+  Control over the database is governed by certificates. Only users whose certificates grant them the database administrator role or higher can create and manage agents.
+ The DBA retains full control over connections to the AI model (through connection strings), the agent configuration, and the queries that the agent is allowed to run.
+
+* **Mitigation: Agent scope**
+ An AI agent is created for a specific database and has no access to other databases on the server, ensuring database-level isolation.
+
+
+
+## Data compromise during transit
+
+Concern: Data may be compromised during transit.
+
+* **Mitigation: Secure TLS (Transport Layer Security) communication**
+  All data transferred between the client, the agent, the database, and the AI model is sent over HTTPS, ensuring it is encrypted in transit.
+
+
+
+## Untraceable malicious or unexpected actions
+
+Concern: Inability to trace malicious or unexpected actions related to agents.
+
+* **Mitigation: Audit logging**
+ RavenDB [admin logs](../../studio/server/debug/admin-logs/) track the creation, modification, and deletion of AI agents, as well as agent interactions with the database.
+
+ Example of an audit log entry recorded when an agent was deleted:
+ ```
+ Starting to process record 16 (current 15) for aiAgent_useHandleToRunChat_1.
+ Type: DeleteAiAgentCommand.
+ Cluster database change type: RecordChanged
+ Date 2025-09-23 22:29:45.0391
+ Level DEBUG
+ Thread ID 58
+ Resource aiAgent_useHandleToRunChat_1
+ Logger Raven.Server.Documents.DocumentDatabase
+ ```
+
+
+## AI model data memorization
+
+Concern: Sensitive data might inadvertently be memorized and reproduced by the AI model.
+
+* **Mitigation: Free selection of AI model**
+  RavenDB doesn't enforce the use of specific providers or AI models; you are free to choose the services that best suit your needs and security requirements.
+  When using the service of your choice, it is your responsibility to define safe queries and expose only the data you intend to share with the AI model.
+
+* **Mitigation: Agent parameters**
+ You can use [agent parameters](../../ai-integration/ai-agents/ai-agents_overview#query-parameters) to limit the scope of the defined query and the dataset subsequently transferred to the AI model.
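+
+  For example, a minimal sketch of such a scope-limiting query (the field names are illustrative assumptions) - the client-supplied agent parameter confines results to a single customer, and the projection and limit bound what reaches the model -
+
+  ```rql
+  // $companyId is supplied by the client, limiting results to one customer
+  from Orders as O
+  where O.Company == $companyId
+  select O.OrderedAt, O.ShipTo.Country
+  limit 10
+  ```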
+
+
+
+## Validation or injection attacks via user input
+
+Concern: Validation or injection attacks crafted through malicious user input.
+
+* **Mitigation: Query scope**
+ The agent queries a limited subset of the stored data, restricting an attacker's access to the rest of the data and to data belonging to other users.
+
+* **Mitigation: Read-only access**
+  Query tools can run only read-only RQL queries, preventing attackers from modifying any data.
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_start.mdx b/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_start.mdx
new file mode 100644
index 0000000000..a5bfd34959
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/ai-agents/ai-agents_start.mdx
@@ -0,0 +1,53 @@
+---
+title: "AI Agents: Start"
+hide_table_of_contents: true
+sidebar_label: Start
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+
+import CardWithImage from "@site/src/components/Common/CardWithImage";
+import CardWithImageHorizontal from "@site/src/components/Common/CardWithImageHorizontal";
+import ColGrid from "@site/src/components/ColGrid";
+import practicalLookAiAgentsImage from "../assets/practical-look-ai-agents-article-image.webp";
+
+import webinarThumbnailPlaceholder from "@site/static/img/webinar.webp";
+
+# AI Agents
+
+### Create conversational AI proxies for your applications.
+AI agents are server-side components that act as secure proxies between RavenDB clients and AI models. They can be easily customized to handle specific client needs, tasks or workflows, such as answering questions, performing data analysis, or automating processes.
+ - Using AI agents frees developers from the need to manage the communication with the AI model in their code, and enables rapid integration of AI capabilities into their applications.
+ - An agent receives requests from clients and maintains continuous conversations with AI models to fulfill them. During the conversation, the agent can enable the model to securely query a RavenDB database (e.g., fetch recent orders or run vector searches on products) and request the client to perform actions (like sending emails or creating new orders).
+ - You can use AI agents to quickly create an intelligent, actionable, conversational interface for your applications, in a way that abstracts much of the complexity of AI integration.
+
+### Use cases
+Creating an AI agent and assigning it a role can be done in minutes using Studio or the API, making it easy to address a wide variety of use cases like -
+* Customer support chatbot agents
+* Data analysis and reporting agents
+* Content generation agents
+* Workflow automation agents
+* Intelligent recommendation agents
+
+### Technical documentation
+Use the technical documentation to learn how to create and manage AI agents, configure secure database access, enable agents to trigger client actions, and more.
+
+
+
+
+
+
+#### Learn more: In-depth AI agents articles
+
+
+
+
+### Related live sessions & videos
+Watch our webinars to see AI agents in action and learn practical implementation techniques.
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_flowchart.png b/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_flowchart.png
new file mode 100644
index 0000000000..0b295c196d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_flowchart.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_apiImage.png b/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_apiImage.png
new file mode 100644
index 0000000000..834870f37d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_apiImage.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_ovImage.png b/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_ovImage.png
new file mode 100644
index 0000000000..94cf289840
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_ovImage.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_studioImage.png b/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_studioImage.png
new file mode 100644
index 0000000000..93c97251cc
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/assets/ai-agents_start_studioImage.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/_category_.json b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/_category_.json
new file mode 100644
index 0000000000..55cd0dacc8
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 2,
+ "label": "Creating AI Agents"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_ai-agents-view.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_ai-agents-view.png
new file mode 100644
index 0000000000..8e82023b9c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_ai-agents-view.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-basic-settings.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-basic-settings.png
new file mode 100644
index 0000000000..9f0fe06795
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-basic-settings.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-basic-settings_schema.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-basic-settings_schema.png
new file mode 100644
index 0000000000..16386152a1
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-basic-settings_schema.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-chat-trimming.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-chat-trimming.png
new file mode 100644
index 0000000000..6cc2d1a996
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_config-chat-trimming.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_connection-string.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_connection-string.png
new file mode 100644
index 0000000000..feb4231e1f
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_connection-string.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_connection-string_select-or-create.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_connection-string_select-or-create.png
new file mode 100644
index 0000000000..dcdc557b30
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_connection-string_select-or-create.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_create-ai-agent.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_create-ai-agent.png
new file mode 100644
index 0000000000..f6239d076e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_create-ai-agent.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools.png
new file mode 100644
index 0000000000..e736aa4321
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools_add-action-tool.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools_add-action-tool.png
new file mode 100644
index 0000000000..33e8823a4c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools_add-action-tool.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools_add-query-tool.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools_add-query-tool.png
new file mode 100644
index 0000000000..c6dc7f1161
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_define-agent-tools_add-query-tool.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_run-agent.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_run-agent.png
new file mode 100644
index 0000000000..30cab35916
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_run-agent.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_action-tool.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_action-tool.png
new file mode 100644
index 0000000000..fbac6d49b7
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_action-tool.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_llm-response-minimized.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_llm-response-minimized.png
new file mode 100644
index 0000000000..8806bdd6b7
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_llm-response-minimized.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_params.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_params.png
new file mode 100644
index 0000000000..52e545294d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_params.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_prompts.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_prompts.png
new file mode 100644
index 0000000000..74e865db76
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_prompts.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_query-tool.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_query-tool.png
new file mode 100644
index 0000000000..2b050f7db5
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_query-tool.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_raw-data.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_raw-data.png
new file mode 100644
index 0000000000..52db6317b8
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_running_raw-data.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_runtime-view.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_runtime-view.png
new file mode 100644
index 0000000000..91ac663fff
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_runtime-view.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_save-agent.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_save-agent.png
new file mode 100644
index 0000000000..aee0b44e1e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_save-agent.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_set-agent-params.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_set-agent-params.png
new file mode 100644
index 0000000000..a93a512867
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_set-agent-params.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-agent_run-test.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-agent_run-test.png
new file mode 100644
index 0000000000..cf36ea16ae
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-agent_run-test.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-agent_test-button.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-agent_test-button.png
new file mode 100644
index 0000000000..bd97ab4ee2
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-agent_test-button.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-results_minimized.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-results_minimized.png
new file mode 100644
index 0000000000..964d7638fe
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_test-results_minimized.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_your-agent.png b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_your-agent.png
new file mode 100644
index 0000000000..f7e97944da
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/ai-agents_your-agent.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_ai-agents-view.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_ai-agents-view.snagx
new file mode 100644
index 0000000000..05acef667d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_ai-agents-view.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-basic-settings.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-basic-settings.snagx
new file mode 100644
index 0000000000..b28b2d8d88
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-basic-settings.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-basic-settings_schema.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-basic-settings_schema.snagx
new file mode 100644
index 0000000000..c024726034
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-basic-settings_schema.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming.snagx
new file mode 100644
index 0000000000..83a260100a
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_summarization-settings.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_summarization-settings.snagx
new file mode 100644
index 0000000000..f6e4effc8f
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_summarization-settings.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_summarization-settings_history-expiration.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_summarization-settings_history-expiration.snagx
new file mode 100644
index 0000000000..88d7b36b03
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_summarization-settings_history-expiration.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_truncation-settings.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_truncation-settings.snagx
new file mode 100644
index 0000000000..30d85702da
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_config-chat-trimming_truncation-settings.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_connection-string.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_connection-string.snagx
new file mode 100644
index 0000000000..c027fe57d3
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_connection-string.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_connection-string_select-or-create.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_connection-string_select-or-create.snagx
new file mode 100644
index 0000000000..37e999d0b5
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_connection-string_select-or-create.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_conversations.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_conversations.snagx
new file mode 100644
index 0000000000..1dde92fb1e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_conversations.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_create-ai-agent.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_create-ai-agent.snagx
new file mode 100644
index 0000000000..d04ca31692
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_create-ai-agent.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools.snagx
new file mode 100644
index 0000000000..d67f29aa6f
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools_add-action-tool.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools_add-action-tool.snagx
new file mode 100644
index 0000000000..15c97f67f1
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools_add-action-tool.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools_add-query-tool.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools_add-query-tool.snagx
new file mode 100644
index 0000000000..cd4d353b5e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_define-agent-tools_add-query-tool.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_run-agent.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_run-agent.snagx
new file mode 100644
index 0000000000..8ca1cca7ac
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_run-agent.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_action-tool.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_action-tool.snagx
new file mode 100644
index 0000000000..96ca8ea820
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_action-tool.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_llm-response-minimized.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_llm-response-minimized.snagx
new file mode 100644
index 0000000000..6f1812864e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_llm-response-minimized.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_params.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_params.snagx
new file mode 100644
index 0000000000..99abc1458b
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_params.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_prompts.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_prompts.snagx
new file mode 100644
index 0000000000..499952c701
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_prompts.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_query-tool.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_query-tool.snagx
new file mode 100644
index 0000000000..1773184308
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_query-tool.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_raw-data.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_raw-data.snagx
new file mode 100644
index 0000000000..f9fc29b12c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_running_raw-data.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_runtime-view.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_runtime-view.snagx
new file mode 100644
index 0000000000..7782b9b36b
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_runtime-view.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_save-agent.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_save-agent.snagx
new file mode 100644
index 0000000000..09c543c7fc
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_save-agent.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-agent-params.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-agent-params.snagx
new file mode 100644
index 0000000000..fb02342cca
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-agent-params.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-agent-params_params-list.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-agent-params_params-list.snagx
new file mode 100644
index 0000000000..77458f246f
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-agent-params_params-list.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-chat-persistence.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-chat-persistence.snagx
new file mode 100644
index 0000000000..0c878d48da
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-chat-persistence.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-chat-persistence_set-expiration.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-chat-persistence_set-expiration.snagx
new file mode 100644
index 0000000000..839ea33cf7
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_set-chat-persistence_set-expiration.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-agent_run-test.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-agent_run-test.snagx
new file mode 100644
index 0000000000..a7b62e98dc
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-agent_run-test.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-agent_test-button.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-agent_test-button.snagx
new file mode 100644
index 0000000000..38e2604d64
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-agent_test-button.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-results_minimized.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-results_minimized.snagx
new file mode 100644
index 0000000000..4e6f0ef18c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_test-results_minimized.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_your-agent.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_your-agent.snagx
new file mode 100644
index 0000000000..541e41132e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/ai-agents_your-agent.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_hash-flow.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_hash-flow.snagx
new file mode 100644
index 0000000000..7e6eff9d12
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_hash-flow.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_licensing.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_licensing.snagx
new file mode 100644
index 0000000000..cb3a8439a6
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_licensing.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_metadata-identifier-and-hash-codes.snagx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_metadata-identifier-and-hash-codes.snagx
new file mode 100644
index 0000000000..f9b564b644
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/assets/snagit/gen-ai_metadata-identifier-and-hash-codes.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api.mdx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api.mdx
new file mode 100644
index 0000000000..8c420895db
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api.mdx
@@ -0,0 +1,1289 @@
+---
+title: "Creating AI agents: API"
+hide_table_of_contents: true
+sidebar_label: Client API
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Creating AI agents: API
+
+
+* To create an AI agent, a client defines its configuration, provides it with settings and tools, and registers the agent with the server.
+
+* Once the agent is created, the client can initiate or resume conversations, get LLM responses, and perform actions based on LLM insights.
+
+* This page provides a step-by-step guide to creating an AI agent and interacting with it using the Client API.
+
+* In this article:
+ * [Creating a connection string](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#creating-a-connection-string)
+ * [Defining an agent configuration](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#defining-an-agent-configuration)
+ * [Set the agent ID](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#set-the-agent-id)
+ * [Define a response object](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#define-a-response-object)
+ * [Add agent parameters](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#add-agent-parameters)
+ * [Set maximum number of iterations](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#set-maximum-number-of-iterations)
+ * [Set chat trimming configuration](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#set-chat-trimming-configuration)
+ * [Adding agent tools](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#adding-agent-tools)
+ * [Query tools](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#query-tools)
+ * [Initial-context queries](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#initial-context-queries)
+ * [Action tools](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#action-tools)
+ * [Creating the Agent](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#creating-the-agent)
+ * [Retrieving existing agent configurations](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#retrieving-existing-agent-configurations)
+ * [Managing conversations](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#managing-conversations)
+ * [Setting a conversation](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#setting-a-conversation)
+ * [Processing action-tool requests](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#processing-action-tool-requests)
+ * [Action-tool Handlers](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#action-tool-handlers)
+ * [Action-tool Receivers](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#action-tool-receivers)
+ * [Conversation response](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#conversation-response)
+ * [Setting user prompt and running the conversation](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#setting-user-prompt-and-running-the-conversation)
+ * [Stream LLM responses](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#stream-llm-responses)
+ * [Full Example](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#full-example)
+
+
+
+## Creating a connection string
+
+Your agent will need a connection string to connect with the LLM. Create a connection string using an `AiConnectionString` instance and the `PutConnectionStringOperation` operation.
+(You can also create a connection string using Studio, see [here](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#configure-basic-settings))
+
+You can use a local `Ollama` model if your main considerations are speed, cost, open source, or security,
+or a remote `OpenAI` service for its additional resources and capabilities.
+
+* **Example**
+
+
+ ```csharp
+ using (var store = new DocumentStore())
+ {
+ // Define the connection string to OpenAI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string name & identifier
+ Name = "open-ai-cs",
+
+ // Connection type
+ ModelType = AiModelType.Chat,
+
+ // OpenAI connection settings
+ OpenAiSettings = new OpenAiSettings(
+ apiKey: "your-api-key",
+ endpoint: "https://api.openai.com/v1",
+ // LLM model for text generation
+ model: "gpt-4.1")
+ };
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+ }
+ ```
+
+
+ ```csharp
+ using (var store = new DocumentStore())
+ {
+ // Define the connection string to Ollama
+ var connectionString = new AiConnectionString
+ {
+ // Connection string name & identifier
+ Name = "ollama-cs",
+
+ // Connection type
+ ModelType = AiModelType.Chat,
+
+ // Ollama connection settings
+ OllamaSettings = new OllamaSettings(
+ // LLM Ollama model for text generation
+ model: "llama3.2",
+ // local URL
+ uri: "http://localhost:11434/")
+ };
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+ }
+ ```
+
+
+
+* **Syntax**
+
+
+ ```csharp
+ public class AiConnectionString
+ {
+ public string Name { get; set; }
+ public AiModelType ModelType { get; set; }
+ public string Identifier { get; set; }
+ public OpenAiSettings OpenAiSettings { get; set; }
+ ...
+ }
+
+ public class OpenAiSettings : AbstractAiSettings
+ {
+ public string ApiKey { get; set; }
+ public string Endpoint { get; set; }
+ public string Model { get; set; }
+ public int? Dimensions { get; set; }
+ public string OrganizationId { get; set; }
+ public string ProjectId { get; set; }
+ }
+ ```
+
+
+ ```csharp
+ public class AiConnectionString
+ {
+ public string Name { get; set; }
+ public AiModelType ModelType { get; set; }
+ public string Identifier { get; set; }
+ public OllamaSettings OllamaSettings { get; set; }
+ ...
+ }
+
+ public class OllamaSettings : AbstractAiSettings
+ {
+ public string Model { get; set; }
+ public string Uri { get; set; }
+ }
+ ```
+
+
+
+## Defining an agent configuration
+
+To create an AI agent, you need to prepare an **agent configuration** and populate it with
+your settings and tools.
+
+Start by creating a new `AiAgentConfiguration` instance.
+While creating the instance, pass its constructor:
+
+- The agent's name
+- The [connection string](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#creating-a-connection-string) you created
+- A system prompt
+
+The agent will send the system prompt you define here to the LLM to establish its basic characteristics, including its role, purpose, behavior, and the tools it can use.
+
+* **Example**
+ ```csharp
+ // Start setting an agent configuration
+ var agent = new AiAgentConfiguration("reward-productive-employee", connectionString.Name,
+ @"You work for a human experience manager.
+ The manager uses your services to find which employee has made the largest profit and to suggest
+ a reward.
+    The manager provides you with the name of a country, or with the word ""everywhere"" to indicate
+ all countries.
+ Then you:
+ 1. use a query tool to load all the orders sent to the selected country,
+ or a query tool to load all orders sent to all countries.
+ 2. calculate which employee made the largest profit.
+ 3. use a query tool to learn in what general area this employee lives.
+ 4. find suitable vacations sites or other rewards based on the employee's residence area.
+ 5. use an action tool to store in the database the employee's ID, profit, and your reward suggestions.
+ When you're done, return these details in your answer to the user as well.");
+ ```
+
+* `AiAgentConfiguration` constructor
+ ```csharp
+ public AiAgentConfiguration(string name, string connectionStringName, string systemPrompt);
+ ```
+
+* `AiAgentConfiguration` class
+ ```csharp
+ public class AiAgentConfiguration
+ {
+ // A unique identifier given to the AI agent configuration
+ public string Identifier { get; set; }
+
+ // The name of the AI agent configuration
+ public string Name { get; set; }
+
+ // Connection string name
+ public string ConnectionStringName { get; set; }
+
+ // The system prompt that defines the role and purpose of the agent and the LLM
+ public string SystemPrompt { get; set; }
+
+ // An example object that sets the layout for the LLM's response to the user.
+ // The object is translated to a schema before it is sent to the LLM.
+ public string SampleObject { get; set; }
+
+ // A schema that sets the layout for the LLM's response to the user.
+ // If both a sample object and a schema are defined, only the schema is used.
+ public string OutputSchema { get; set; }
+
+ // A list of Query tools that the LLM can use (through the agent) to access the database
+      public List<AiAgentToolQuery> Queries { get; set; } = new List<AiAgentToolQuery>();
+
+ // A list of Action tools that the LLM can use to trigger the user to action
+      public List<AiAgentToolAction> Actions { get; set; } = new List<AiAgentToolAction>();
+
+ // Agent parameters whose value the client passes to the LLM each time a chat is started,
+ // for stricter control over queries initiated by the LLM and as a means for interaction
+ // between the client and the LLM.
+      public List<AiAgentParameter> Parameters { get; set; } = new List<AiAgentParameter>();
+
+ // The trimming configuration defines if and how the conversation is summarized,
+ // to minimize the amount of data passed to the LLM when a conversation is started.
+ public AiAgentChatTrimmingConfiguration ChatTrimming { get; set; } = new
+ AiAgentChatTrimmingConfiguration(new AiAgentSummarizationByTokens());
+
+ // Control over the number of times that the LLM is allowed to use agent tools to handle
+ // a user prompt.
+ public int? MaxModelIterationsPerCall { get; set; }
+ }
+ ```
+
+Once the initial agent configuration is created, we need to add a few more elements to it.
+
+### Set the agent ID:
+Use the `Identifier` property to provide the agent with a unique ID by which the
+system will recognize it.
+
+```csharp
+// Set agent ID
+agent.Identifier = "reward-productive-employee";
+```
+
+### Define a response object:
+Define a [structured output](https://platform.openai.com/docs/guides/structured-outputs) response object that the LLM will populate with its response to the user.
+
+To define the response object, you can use the `SampleObject` and/or the `OutputSchema` property.
+* `SampleObject` is a straightforward sample of the response object that you expect the LLM to return.
+ It is usually simpler to define the response object this way.
+* `OutputSchema` is a formal JSON schema that the LLM can understand.
+  Even when you define the response object as a `SampleObject`, RavenDB will translate the object to a JSON schema before sending it to the LLM. If you prefer, however, you can define the schema explicitly yourself.
+* If you define both a sample object and a schema, the agent will send only the schema to the LLM.
+
+
+
+```csharp
+// Set sample object
+agent.SampleObject = "{" +
+ "\"suggestedReward\": \"your suggestions for a reward\", " +
+ "\"employeeId\": \"the ID of the employee that made the largest profit\", " +
+ "\"profit\": \"the profit the employee made\"" +
+ "}";
+```
+
+
+```csharp
+// Set output schema
+agent.OutputSchema = "{" +
+ "\"name\": \"RHkxaWo5ZHhMM1RuVnIzZHhxZm9vM0c0UnYrL0JWbkhyRDVMd0tJa1g4Yz0\", " +
+ "\"strict\": true, " +
+ "\"schema\": {" +
+ "\"type\": \"object\", " +
+ "\"properties\": {" +
+ "\"employeeID\": {" +
+ "\"type\": \"string\", " +
+ "\"description\": \"the ID of the employee that made the largest profit\"" +
+ "}, " +
+ "\"profit\": {" +
+ "\"type\": \"string\", " +
+ "\"description\": \"the profit the employee made\"" +
+ "}, " +
+ "\"suggestedReward\": {" +
+ "\"type\": \"string\", " +
+ "\"description\": \"your suggestions for a reward\"" +
+ "}" +
+ "}, " +
+ "\"required\": [" +
+ "\"employeeID\", " +
+ "\"profit\", " +
+ "\"suggestedReward\"" +
+ "], " +
+ "\"additionalProperties\": false" +
+ "}" +
+ "}";
+```
+
+
+
+### Add agent parameters:
+Agent parameters are parameters that can be used by [query tools](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#query-tools) when the agent queries the database on behalf of the LLM.
+Values for agent parameters are provided by the client, or by a user through the client,
+when a chat is started.
+When the agent is requested to use a query tool that includes agent parameters, it replaces these parameters with the values provided by the user before running the query.
+Using agent parameters allows the client to focus the queries and the entire interaction on its current needs.
+
+In the example below, an agent parameter is used to determine what area
+of the world a query will handle.
+
+To add an agent parameter, create an `AiAgentParameter` instance, initialize it with
+the parameter's **name** and **description** (explaining to the LLM what the parameter
+is for), and pass this instance to the `agent.Parameters.Add` method.
+
+* **Example**
+ ```csharp
+ // Set agent parameters
+ agent.Parameters.Add(new AiAgentParameter(
+ "country", "A specific country that orders were shipped to, " +
+ "or \"everywhere\" to look for orders shipped to all countries"));
+ ```
+
+* `AiAgentParameter` definition
+ ```csharp
+ public AiAgentParameter(string name, string description);
+ ```
+
+### Set maximum number of iterations:
+You can limit the number of times that the LLM is allowed to request the usage of
+agent tools in response to a single user prompt. Use `MaxModelIterationsPerCall` to change this limit.
+
+* **Example**
+ ```csharp
+  // Limit the number of times the LLM can request tools in response to a single user prompt
+ agent.MaxModelIterationsPerCall = 3;
+ ```
+
+* `MaxModelIterationsPerCall` Definition
+ ```csharp
+ public int? MaxModelIterationsPerCall
+ ```
+
+
+Note that you can improve the TTFB (Time To First Byte) by getting the LLM's response in chunks using streaming.
+Find more about streaming in the [overview](../../../ai-integration/ai-agents/ai-agents_overview#streaming-llm-responses) and [below](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#stream-llm-responses).
+
+
+### Set chat trimming configuration:
+
+To [summarize the conversation](../../../ai-integration/ai-agents/ai-agents_overview#define-a-chat-trimming-configuration), create an `AiAgentChatTrimmingConfiguration` instance,
+use it to configure your trimming strategy, and set the agent's `ChatTrimming` property
+with the instance.
+
+When creating the instance, pass its constructor a summarization strategy using
+an `AiAgentSummarizationByTokens` instance.
+
+The original conversation, before it was summarized, can optionally be
+kept in the `@conversations-history` collection.
+To determine whether to keep the original messages and for how long, also pass the
+`AiAgentChatTrimmingConfiguration` constructor an `AiAgentHistoryConfiguration` instance
+with your settings.
+
+* **Example**
+ ```csharp
+ // Set chat trimming configuration
+ AiAgentSummarizationByTokens summarization = new AiAgentSummarizationByTokens()
+ {
+ // When the number of tokens stored in the conversation exceeds this limit
+ // summarization of old messages will be triggered.
+ MaxTokensBeforeSummarization = 32768,
+ // The maximum number of tokens that the conversation is allowed to contain
+ // after summarization.
+ MaxTokensAfterSummarization = 1024
+ };
+ agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization);
+ ```
+
+* **Syntax**
+ ```csharp
+ public class AiAgentSummarizationByTokens
+ {
+ // The maximum number of tokens allowed before summarization is triggered.
+ public long? MaxTokensBeforeSummarization { get; set; }
+
+ // The maximum number of tokens allowed in the generated summary.
+ public long? MaxTokensAfterSummarization { get; set; }
+ }
+
+ public class AiAgentHistoryConfiguration
+ {
+ // Enables history for AI agents conversations.
+ public AiAgentHistoryConfiguration()
+
+ // Enables history for AI agents conversations,
+ // with `expiration` determining the timespan after which history documents expire.
+ public AiAgentHistoryConfiguration(TimeSpan expiration)
+
+ // The timespan after which history documents expire.
+ public int? HistoryExpirationInSec { get; set; }
+ }
+ ```
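+
+* **Example (keeping the original messages)**
+  A minimal sketch that also keeps the original messages, passing an
+  `AiAgentHistoryConfiguration` instance to the `AiAgentChatTrimmingConfiguration`
+  constructor as described above:
+  ```csharp
+  // Keep the original messages in @conversations-history for 7 days
+  // after they are summarized
+  var history = new AiAgentHistoryConfiguration(TimeSpan.FromDays(7));
+  agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization, history);
+  ```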
+
+## Adding agent tools
+
+You can enhance your agent with Query and Action tools that allow the LLM to query your database and trigger client actions.
+After defining agent tools and submitting them to the LLM, it is up to the LLM to decide if and when to use them.
+
+### Query tools:
+
+[Query tools](../../../ai-integration/ai-agents/ai-agents_overview#query-tools) provide the LLM with the ability to retrieve data from the database.
+A query tool includes a natural-language **description** that explains to the LLM what the tool is for, and an **RQL query**.
+
+* **Passing values to query tools**
+ * Query tools optionally include [parameters](../../../ai-integration/ai-agents/ai-agents_overview#query-parameters), identified by a `$` prefix.
+ Both the user and the LLM can pass values to these parameters.
+ * **Passing values from the user**
+ Users can pass values to queries through **agent parameters**.
+ If agent parameters are defined in the agent configuration -
+ * The client has to provide values for them when initiating a conversation with the agent.
+    * The parameters can be included in query tools' RQL queries.
+ Before running a query, the agent will replace any agent parameter included in it with its value.
+ * **Passing values from the LLM**
+ The LLM can pass values to queries through a **parameters schema**.
+ * The parameters schema layout is defined as part of the query tool.
+ * When the LLM requests the agent to run a query, it will add parameter values to the request.
+ * You can define a parameters schema either as a **sample object** or a **formal JSON schema**.
+ If you define both, the LLM will pass parameter values only through the JSON schema.
+ * Before running a query, the agent will replace any parameter included in it with its value.
+
+* **Example**
+ * The first query tool will be used by the LLM when it needs to retrieve all the
+    orders sent to any place in the world. (The system prompt instructs it to use this
+    tool when the user enters "everywhere" at the start of the conversation.)
+ * The second query tool will be used by the LLM when it needs to retrieve all the
+ orders that were sent to a particular country, using the `$country` agent parameter.
+ * The third tool retrieves from the database the general location of an employee.
+ To do this it uses a `$employeeId` parameter, whose value is set by the LLM in its
+ request to run this tool.
+
+ ```csharp
+ agent.Queries =
+ [
+ // Set a query tool that triggers the agent to retrieve all the orders sent everywhere
+ new AiAgentToolQuery
+ {
+ // Query tool name
+ Name = "retrieve-orders-sent-to-all-countries",
+
+ // Query tool description
+ Description = "a query tool that allows you to retrieve all orders sent to all countries.",
+
+ // Query tool RQL query
+ Query = "from Orders as O select O.Employee, O.Lines.Quantity",
+
+ // Sample parameters object for the query tool
+ // The LLM can use this object to pass parameters to the query tool
+ ParametersSampleObject = "{}"
+ },
+
+ // Set a query tool that triggers the agent to retrieve all the orders sent to a
+ // specific country
+ new AiAgentToolQuery
+ {
+ Name = "retrieve-orders-sent-to-a-specific-country",
+ Description = "a query tool that allows you to retrieve all orders sent " +
+ "to a specific country",
+ Query = "from Orders as O where O.ShipTo.Country == $country select O.Employee, " +
+ "O.Lines.Quantity",
+ ParametersSampleObject = "{}"
+ },
+
+ // Set a query tool that triggers the agent to retrieve the performer's
+ // residence region details (country, city, and region) from the database
+ new AiAgentToolQuery
+ {
+ Name = "retrieve-performer-living-region",
+ Description = "a query tool that allows you to retrieve an employee's country, " +
+ "city, and region, by the employee's ID",
+ Query = "from Employees as E where id() == $employeeId select E.Address.Country, " +
+ "E.Address.City, E.Address.Region",
+ ParametersSampleObject = "{" +
+ "\"employeeId\": \"embed the employee's ID here\"" +
+ "}"
+ }
+ ];
+ ```
+
+* **Syntax**
+ Query tools are defined in a list of `AiAgentToolQuery` classes.
+ ```csharp
+ public class AiAgentToolQuery
+ {
+ public string Name { get; set; }
+ public string Description { get; set; }
+ public string Query { get; set; }
+ public string ParametersSampleObject { get; set; }
+ public string ParametersSchema { get; set; }
+ }
+ ```
+
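+* **Defining a parameters schema**
+  If you prefer a formal schema over a sample object, you can set the `ParametersSchema`
+  property instead. A minimal sketch; the exact schema layout is an assumption here,
+  modeled on standard JSON schema:
+  ```csharp
+  // A sketch: the residence-region tool from above, with a formal JSON schema
+  // for its parameters instead of a sample object
+  new AiAgentToolQuery
+  {
+      Name = "retrieve-performer-living-region",
+      Description = "a query tool that allows you to retrieve an employee's country, " +
+                    "city, and region, by the employee's ID",
+      Query = "from Employees as E where id() == $employeeId select E.Address.Country, " +
+              "E.Address.City, E.Address.Region",
+      ParametersSchema = "{" +
+          "\"type\": \"object\", " +
+          "\"properties\": {" +
+              "\"employeeId\": {" +
+                  "\"type\": \"string\", " +
+                  "\"description\": \"the employee's ID\"" +
+              "}" +
+          "}, " +
+          "\"required\": [\"employeeId\"], " +
+          "\"additionalProperties\": false" +
+      "}"
+  };
+  ```
+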
+#### Initial-context queries
+
+* You can set a query tool as an [initial-context query](../../../ai-integration/ai-agents/ai-agents_overview#initial-context-queries) using its `Options.AddToInitialContext` property, to execute the query and provide the LLM with its results immediately when the agent is started.
+  * An initial-context query is **not allowed** to use LLM parameters, since the query
+    runs before the conversation starts, before the first communication with the LLM, so the LLM has no opportunity to fill the parameters with values.
+  * An initial-context query **is** allowed to use agent parameters, whose values are provided by the user before the query is executed.
+
+* You can use the `Options.AllowModelQueries` property to enable or disable a query tool.
+ * When a query tool is enabled, the LLM can freely trigger its execution.
+ * When a query tool is disabled, the LLM cannot trigger its execution.
+ * If a query tool is set as an initial-context query, it will be executed when the conversation
+ starts even if disabled using `AllowModelQueries`.
+
+* **Example**
+ Set a query tool that runs when the agent is started and retrieves all the orders sent everywhere.
+ ```csharp
+ new AiAgentToolQuery
+ {
+ Name = "retrieve-orders-sent-to-all-countries",
+ Description = "a query tool that allows you to retrieve all orders sent to all countries.",
+ Query = "from Orders as O select O.Employee, O.Lines.Quantity",
+      ParametersSampleObject = "{}",
+
+      Options = new AiAgentToolQueryOptions
+ {
+ // The LLM is allowed to trigger the execution of this query during the conversation
+ AllowModelQueries = true,
+
+ // The query will be executed when the conversation starts
+ // and its results will be added to the initial context
+ AddToInitialContext = true
+ }
+ }
+ ```
+
+* **Syntax**
+ ```csharp
+ public class AiAgentToolQueryOptions : IDynamicJson
+ {
+ public bool? AllowModelQueries { get; set; }
+ public bool? AddToInitialContext { get; set; }
+ }
+ ```
+
+ |Property|Type|Description|
+ |--------|----|-----------|
+ |`AllowModelQueries`|`bool`| `true`: the LLM can trigger the execution of this query tool. `false`: the LLM cannot trigger the execution of this query tool. `null`: server-side defaults apply.|
+ |`AddToInitialContext`|`bool`| `true`: the query will be executed when the conversation starts and its results added to the initial context. `false`: the query will not be executed when the conversation starts. `null`: server-side defaults apply.|
+
+
+  Note: the two flags can be set independently of each other.
+ * Setting `AddToInitialContext` to `true` and `AllowModelQueries` to `false`
+ will cause the query to be executed when the conversation starts,
+ but the LLM will not be able to trigger its execution later in the conversation.
+ * Setting `AddToInitialContext` to `true` and `AllowModelQueries` to `true`
+ will cause the query to be executed when the conversation starts,
+ and the LLM will also be able to trigger its execution later in the conversation.
+
+
+### Action tools:
+
+Action tools allow the LLM to trigger the client to action (e.g., to modify or add a document).
+An action tool includes a natural-language **description** that explains to the LLM what the tool is capable of, and a **schema** that the LLM will fill with details related to the requested action before sending it to the agent.
+
+In the example below, the action tool requests the client to store an employee's details
+in the database. The LLM will provide the employee's ID and other details whenever it requests the agent
+to apply the tool.
+
+When the client finishes performing the action, it is required to send the LLM
+a response that explains how it went, e.g. `done`.
+
+* **Example**
+  The following action tool sends the client the employee details that need to be stored in the database.
+ ```csharp
+ agent.Actions =
+ [
+ // Set an action tool that triggers the client to store the performer's details
+ new AiAgentToolAction
+ {
+ Name = "store-performer-details",
+ Description = "an action tool that allows you to store the ID of the employee that made " +
+ "the largest profit, the profit, and your suggestions for a reward, in the " +
+ "database.",
+ ParametersSampleObject = "{" +
+ "\"suggestedReward\": \"embed your suggestions for a reward here\", " +
+                                   "\"employeeId\": \"embed the employee's ID here\", " +
+                                   "\"profit\": \"embed the employee's profit here\"" +
+ "}"
+ }
+ ];
+ ```
+
+* **Syntax**
+ Action tools are defined in a list of `AiAgentToolAction` classes.
+ ```csharp
+ public class AiAgentToolAction
+ {
+ public string Name { get; set; }
+ public string Description { get; set; }
+ public string ParametersSampleObject { get; set; }
+ public string ParametersSchema { get; set; }
+ }
+ ```
+
+## Creating the Agent
+
+The agent configuration is ready, and we can now register the agent on the server
+using the `CreateAgent` method.
+
+* Create a response object class that matches the response schema defined in your agent configuration.
+* Call `CreateAgent` and pass it -
+ * The agent configuration
+ * A new instance of the response object class
+
+* **Example**
+ ```csharp
+ // Create the agent
+ // Pass it an object for its response
+ var createResult = await store.AI.CreateAgentAsync(agent, new Performer
+ {
+ suggestedReward = "your suggestions for a reward",
+ employeeId = "the ID of the employee that made the largest profit",
+ profit = "the profit the employee made"
+ });
+
+ // An object for the LLM response
+ public class Performer
+ {
+ public string suggestedReward;
+ public string employeeId;
+ public string profit;
+ }
+ ```
+
+* `CreateAgent` overloads
+ ```csharp
+ // Asynchronously creates or updates an AI agent configuration on the database,
+ // with the given schema as an example for a response object
+  Task<AiAgentConfigurationResult> CreateAgentAsync<TSchema>(AiAgentConfiguration configuration, TSchema sampleObject, CancellationToken token = default)
+
+ // Creates or updates (synchronously) an AI agent configuration on the database
+ AiAgentConfigurationResult CreateAgent(AiAgentConfiguration configuration)
+
+ // Asynchronously creates or updates an AI agent configuration on the database
+  Task<AiAgentConfigurationResult> CreateAgentAsync(AiAgentConfiguration configuration, CancellationToken token = default)
+
+ // Creates or updates (synchronously) an AI agent configuration on the database,
+ // with the given schema as an example for a response object
+  AiAgentConfigurationResult CreateAgent<TSchema>(AiAgentConfiguration configuration, TSchema sampleObject) where TSchema : new()
+ ```
+
+ | Property | Type | Description |
+ |----------|------|-------------|
+ | configuration | `AiAgentConfiguration` | The agent configuration |
+ | sampleObject | `TSchema` | Example response object |
+
+ | Return value | Description |
+ |--------------|-------------|
+ | `AiAgentConfigurationResult` | The result of the agent configuration creation or update, including the agent's ID. |
+
+
+
+## Retrieving existing agent configurations
+
+You can retrieve the configuration of **an existing agent** using `GetAgent`.
+
+* **Example**
+ ```csharp
+ // Retrieve an existing agent configuration by its ID
+ var existingAgent = store.AI.GetAgent("reward-productive-employee");
+ ```
+
+You can also retrieve the configurations of **all existing agents** using `GetAgents`.
+
+* **Example**
+ ```csharp
+ // Extract the agent configurations from the response into a new list
+ var existingAgentsList = store.AI.GetAgents();
+ var agents = existingAgentsList.AiAgents;
+ ```
+
+* `GetAgent` and `GetAgents` overloads
+ ```csharp
+ // Synchronously retrieves the configuration of an AI agent by its ID
+ AiAgentConfiguration GetAgent(string agentId)
+
+ // Asynchronously retrieves the configuration of an AI agent by its ID
+  Task<AiAgentConfiguration> GetAgentAsync(string agentId, CancellationToken token = default)
+
+ // Synchronously retrieves the configurations of all AI agents
+ GetAiAgentsResponse GetAgents()
+
+ // Asynchronously retrieves the configurations of all AI agents
+  Task<GetAiAgentsResponse> GetAgentsAsync(CancellationToken token = default)
+ ```
+
+ | Property | Type | Description |
+ |----------|------|-------------|
+ | agentId | `string` | The unique ID of the agent you want to retrieve |
+
+ | Return value | Description |
+ |--------------|-------------|
+ | `AiAgentConfiguration` | The agent configuration |
+ | `GetAiAgentsResponse` | The response containing a list of all agent configurations |
+
+* `GetAiAgentsResponse` class
+ ```csharp
+ public class GetAiAgentsResponse
+ {
+      public List<AiAgentConfiguration> AiAgents { get; set; }
+ }
+ ```
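+
+* **Example**
+  The asynchronous variants listed above can be used the same way:
+  ```csharp
+  // Retrieve one agent configuration, and then all of them, asynchronously
+  var agentConfig = await store.AI.GetAgentAsync("reward-productive-employee");
+  var agentsResponse = await store.AI.GetAgentsAsync();
+  var agents = agentsResponse.AiAgents;
+  ```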
+
+
+
+## Managing conversations
+
+### Setting a conversation:
+
+* Set a conversation using the `store.AI.Conversation` method.
+ Pass `Conversation`:
+ * The **agent ID**
+ * The **conversation ID**
+ The conversation ID that you provide when starting a conversation determines whether a new conversation will start, or an existing conversation will be continued.
+ * Conversations are kept in the `@conversations` collection.
+ A conversation document's name starts with a prefix (such as `Chats/`) that can be
+ set when the conversation is initiated.
+ * You can -
+ **Provide a full ID**, including a prefix and the ID that follows it.
+ **Provide a prefix that ends with `/` or `|`** to trigger automatic ID creation,
+ similarly to the creation of automatic IDs for documents.
+    * If you pass the method the ID of an existing conversation (e.g. `Chats/0000000000000008883-A`),
+      the conversation will be retrieved from storage and continued where you left off.
+    * If you provide just a prefix (e.g. `Chats/`), a new conversation will start.
+ * Values for **agent parameters**, if defined, in an `AiConversationCreationOptions` instance.
+* Set the user prompt using the `SetUserPrompt` method.
+ The user prompt informs the agent of the user's requests and expectations for this chat.
+* Use the value returned by the `Conversation` method to run the chat.
+
+* **Example**
+ ```csharp
+ // Create a conversation instance
+ // Initialize it with -
+ // The agent's ID,
+  // A prefix (Performers/) for conversations stored in the @conversations collection,
+ // Agent parameters' values
+ var chat = store.AI.Conversation(
+ createResult.Identifier,
+ "Performers/",
+ new AiConversationCreationOptions().AddParameter("country", "France"));
+ ```
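+
+  For illustration, passing the full ID of an existing conversation instead of a prefix
+  resumes that conversation (the ID below is hypothetical):
+  ```csharp
+  // Resume a stored conversation where it left off
+  // ("Performers/0000000000000008883-A" is a hypothetical conversation ID)
+  var resumedChat = store.AI.Conversation(
+      createResult.Identifier,
+      "Performers/0000000000000008883-A",
+      new AiConversationCreationOptions().AddParameter("country", "France"));
+  ```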
+
+* `Conversation` Definition
+ ```csharp
+ public IAiConversationOperations Conversation(string agentId, string conversationId, AiConversationCreationOptions creationOptions, string changeVector = null)
+ ```
+
+ | Property | Type | Description |
+ |----------|------|-------------|
+ | agentId | `string` | The agent unique ID |
+ | conversationId | `string` | The [conversation ID](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#setting-a-conversation) |
+ | creationOptions | `AiConversationCreationOptions` | Conversation creation options (see class definition below) |
+ | changeVector | `string` | Optional change vector for concurrency control |
+
+ | Return value | Description |
+ |--------------|-------------|
+ | `IAiConversationOperations` | The conversation operations interface for conversation management. Methods of this interface like `Run`, `StreamAsync`, `Handle`, and others, allow you send messages, receive responses, handle action tools, and manage various other aspects of the conversation lifecycle. |
+
+* `SetUserPrompt` Definition
+ ```csharp
+ void SetUserPrompt(string userPrompt);
+ ```
+* `AiConversationCreationOptions` class
+ Use this class to set conversation creation options, including values for agent parameters and the conversation's expiration time if it remains idle.
+ ```csharp
+ // Conversation creation options, including agent parameters and idle expiration configuration
+ public class AiConversationCreationOptions
+ {
+ // Values for agent parameters defined in the agent configuration
+ // Used to provide context or input values at the start of the conversation
+      public Dictionary<string, object> Parameters { get; set; }
+
+ // Optional expiration time (in seconds)
+ // If the conversation is idle for longer than this, it will be automatically deleted
+ public int? ExpirationInSec { get; set; }
+
+ // Initializes a new conversation instance with no parameters
+ // Use when you want to configure conversation options incrementally
+ public AiConversationCreationOptions();
+
+ // Initializes a new conversation instance and passes it a set of parameter values
+      public AiConversationCreationOptions(Dictionary<string, object> parameters);
+
+ // Adds an agent parameter value for this conversation
+ // Returns the current instance to allow method chaining
+ public AiConversationCreationOptions AddParameter(string name, object value);
+ }
+ ```
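+
+  For illustration, a sketch that combines agent parameters with the idle-expiration option:
+  ```csharp
+  // The conversation is deleted after one hour of inactivity
+  var options = new AiConversationCreationOptions()
+      .AddParameter("country", "France");
+  options.ExpirationInSec = 3600;
+
+  var chat = store.AI.Conversation(createResult.Identifier, "Performers/", options);
+  ```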
+
+### Processing action-tool requests:
+During the conversation, the LLM can request the agent to trigger action tools.
+The agent will pass a requested action tool's name and parameters to the client,
+and it is then up to the client to process the request.
+
+The client can process an action-tool request using a [handler](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#action-tool-handlers) or a [receiver](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#action-tool-receivers).
+
+#### Action-tool Handlers
+A **handler** is created for a specific action tool and registered with the server using the `Handle` method.
+When the LLM triggers this action tool with an action request, the handler is invoked to process the request; it returns a response to the LLM and ends automatically.
+
+
+Handlers are typically used for simple, immediate operations like storing a document in the database and returning a confirmation, performing a quick calculation and sending its results, and other scenarios where the response can be generated and returned in a single step.
+
+
+* To **create a handler**,
+ pass the `Handle` method -
+ * The action tool's name.
+ * An object to populate with the data sent with the action request.
+ Make sure that the object has the same structure defined for the action tool's parameters schema.
+
+* When an **action request for this tool is received**,
+ the handler will be given -
+ * The populated object with the data sent with the action request.
+
+* When you **finish handling the requested action**,
+ `return` a response that will be sent by the agent back to the LLM.
+
+* **Example**
+ In this example, the action tool is requested to store an employee's details in the database.
+ ```csharp
+ // "store-performer-details" action tool handler
+ chat.Handle("store-performer-details", (Performer performer) =>
+ {
+ using (var session = store.OpenSession())
+ {
+ // store the values in the Performers collection in the database
+ session.Store(performer);
+ session.SaveChanges();
+ }
+
+ // return to the agent an indication that the action went well.
+ return "done";
+ });
+
+ // An object that represents the arguments provided by the LLM for this tool call
+ public class Performer
+ {
+ public string suggestedReward;
+ public string employeeId;
+ public string profit;
+ }
+ ```
+* `Handle` overloads
+ ```csharp
+  void Handle<TArgs>(string actionName, Func<TArgs, Task<string>> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel) where TArgs : class;
+
+  void Handle<TArgs>(string actionName, Func<TArgs, string> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel) where TArgs : class;
+
+  void Handle(string actionName, Func<string, Task<string>> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
+
+  void Handle(string actionName, Func<string, string> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
+ ```
+
+ | Property | Type | Description |
+ |----------|------|-------------|
+ | actionName | `string` | The action tool name |
+  | action | `Func<TArgs, Task<string>>` or `Func<TArgs, string>` or `Func<string, Task<string>>` or `Func<string, string>` | The handler function that processes the action request and returns a response to the LLM |
+  | aiHandleError | `AiHandleErrorStrategy` | Error-handling strategy. `SendErrorsToModel` - send errors to the model for handling. `RaiseImmediately` - throw exceptions immediately.|
+
+#### Action-tool Receivers
+A **receiver** is created for a specific action tool and registered with the server using the `Receive` method.
+When the LLM triggers this action tool with an action request, the receiver is invoked to process the request, but unlike a handler, the receiver remains active until `AddActionResponse` is explicitly called to close the pending request and send a response to the LLM.
+
+
+Receivers are typically used asynchronously for multi-step or delayed operations such as waiting for an external event or for user input before responding, performing long-running operations like batch processing or integration with an external system, and other use cases where the response cannot be generated immediately.
+
+
+* To **create a receiver**,
+ pass the `Receive` method -
+ * The action tool's name.
+ * An object to populate with the data sent with the action request.
+ Make sure that this object has the same structure defined for the action tool's parameters schema.
+
+* When an **action request for this tool is received**,
+ the receiver will be given -
+ * An `AiAgentActionRequest` object containing the details of the action request.
+ * The populated object with the data sent with the action request.
+
+* When you **finish handling the requested action**,
+ call `AddActionResponse`. Pass it -
+ * The action tool's ID.
+ * The response to send back to the LLM.
+
+  Note that the response can be sent at any time, even after the receiver has finished executing,
+  and from any context, not necessarily from within the receiver callback, as sketched below.
+
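+  A minimal sketch of this deferred flow, built on the `Receive` and `AddActionResponse`
+  methods described on this page:
+  ```csharp
+  // Capture the tool ID in the receiver and respond later,
+  // from outside the receiver callback
+  string pendingToolId = null;
+
+  chat.Receive("store-performer-details", (AiAgentActionRequest request, Performer performer) =>
+  {
+      // Defer the response until external work completes
+      pendingToolId = request.ToolId;
+  });
+
+  // ... later, from any context:
+  chat.AddActionResponse(pendingToolId, "done");
+  ```
+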
+
+* **Example**
+  In this example, a receiver gets a reward recommendation for a high-performing employee and processes it.
+
+
+ ```csharp
+ chat.Receive("store-performer-details", async (AiAgentActionRequest request, Performer performer) =>
+ {
+ // Perform asynchronous work
+ using (var session = store.OpenAsyncSession())
+ {
+ await session.StoreAsync(performer);
+ await session.SaveChangesAsync();
+ }
+
+ // Example: Send a notification email asynchronously
+ await EmailService.SendNotificationAsync("manager@company.com", performer);
+
+ // Manually send the response to close the action
+ chat.AddActionResponse(request.ToolId, "done");
+ });
+ ```
+
+
+ ```csharp
+ chat.Receive("store-performer-details", (AiAgentActionRequest request, Performer performer) =>
+ {
+ // Perform synchronous work
+ using (var session = store.OpenSession())
+ {
+ session.Store(performer);
+ session.SaveChanges();
+ }
+
+ // Add any processing logic here
+
+ // Manually send the response and close the action
+ chat.AddActionResponse(request.ToolId, "done");
+ });
+ ```
+
+
+
+* `Receive` overloads
+ ```csharp
+ // Registers an Asynchronous receiver for an action tool
+  // Registers an asynchronous receiver for an action tool
+  void Receive<TArgs>(string actionName, Func<AiAgentActionRequest, TArgs, Task> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
+
+  // Registers a synchronous receiver for an action tool
+  void Receive<TArgs>(string actionName, Action<AiAgentActionRequest, TArgs> action, AiHandleErrorStrategy aiHandleError = AiHandleErrorStrategy.SendErrorsToModel)
+ ```
+
+ | Property | Type | Description |
+ |----------|------|-------------|
+ | actionName | `string` | The action tool name |
+  | action | `Func<AiAgentActionRequest, TArgs, Task>` or `Action<AiAgentActionRequest, TArgs>` | The receiver function that processes the action request |
+  | aiHandleError | `AiHandleErrorStrategy` | Error-handling strategy. `SendErrorsToModel` - send errors to the model for handling. `RaiseImmediately` - throw exceptions immediately.|
+
+* `AddActionResponse` Definition
+ ```csharp
+ // Closes the action request and sends the response back to the LLM
+ void AddActionResponse(string toolId, string actionResponse)
+ ```
+
+ | Property | Type | Description |
+ |----------|------|-------------|
+ | toolId | `string` | The action request unique ID |
+ | actionResponse | `string` | The response to send back to the LLM through the agent |
+
+
+* `AiAgentActionRequest` class
+ Contains the action request details, sent by the LLM to the agent and passed to the receiver when invoked.
+ ```csharp
+ public class AiAgentActionRequest
+ {
+ // Action tool name
+ public string Name;
+
+ // Action tool unique ID
+ public string ToolId;
+
+ // Request arguments provided by the LLM
+ public string Arguments;
+ }
+ ```
+
+### Conversation response:
+
+The LLM response is returned by the agent to the client in an `AiAnswer` object, with an answer to the user prompt and the conversation status, indicating whether the conversation is complete or a further "turn" is required.
+
+* `AiAnswer` syntax
+ ```csharp
+  public class AiAnswer<TAnswer>
+ {
+ // The answer content produced by the AI
+ public TAnswer Answer;
+
+ // The status of the conversation
+ public AiConversationResult Status;
+ }
+
+ public enum AiConversationResult
+ {
+ // The conversation has completed and a final answer is available
+ Done,
+ // Further interaction is required, such as responding to tool requests
+ ActionRequired
+ }
+ ```
+
+### Setting user prompt and running the conversation:
+
+Set the user prompt using the `SetUserPrompt` method, and run the conversation using the
+`RunAsync` method.
+
+You can also use `StreamAsync` to **stream** the LLM's response as it is generated.
+Learn how to do this in the [Stream LLM responses](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#stream-llm-responses) section.
+
+
+```csharp
+// Set the user prompt and run the conversation
+chat.SetUserPrompt("send a few suggestions to reward the employee that made the largest profit");
+
+var LLMResponse = await chat.RunAsync(CancellationToken.None);
+
+if (LLMResponse.Status == AiConversationResult.Done)
+{
+ // The LLM successfully processed the user prompt and returned its response.
+ // The performer's ID, profit, and suggested rewards were stored in the Performers
+ // collection by the action tool, and are also returned in the final LLM response.
+}
+```
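+
+For illustration, a sketch of continuing the conversation when the status is
+`ActionRequired` (an assumption: re-running after the pending action-tool requests
+are resolved, e.g. by a receiver calling `AddActionResponse`):
+
+```csharp
+// Keep running until the LLM produces a final answer
+while (LLMResponse.Status == AiConversationResult.ActionRequired)
+{
+    // Resolve any pending action-tool requests here, then continue the chat
+    LLMResponse = await chat.RunAsync(CancellationToken.None);
+}
+```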
+
+See the full example [below](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#full-example).
+
+
+
+## Stream LLM responses
+
+You can set the agent to [stream the LLM's response to the client](../../../ai-integration/ai-agents/ai-agents_overview#streaming-llm-responses) in real time as the LLM generates it, using the `StreamAsync` method, instead of using [RunAsync](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#setting-user-prompt-and-running-the-conversation) which sends the whole response to the client when it is fully prepared.
+
+Streaming the response allows the client to start processing it before it is complete, which can improve the application's responsiveness.
+
+* **Example**
+ ```csharp
+ // A StringBuilder, used in this example to collect the streamed response
+ var reward = new StringBuilder();
+
+ // Using StreamAsync to collect the streamed response
+ // The response property to stream is in this case `suggestedReward`
+ var LLMResponse = await chat.StreamAsync(responseObj => responseObj.suggestedReward, str =>
+ {
+ // Callback invoked with the arrival of each incoming chunk of the processed property
+
+ reward.Append(str); // Add the incoming chunk to the StringBuilder instance
+ return Task.CompletedTask; // Return with an indication that the chunk was processed
+
+ }, CancellationToken.None);
+
+ if (LLMResponse.Status == AiConversationResult.Done)
+ {
+ // Handle the full response when ready
+
+ // The streamed property was fully loaded and handled by the callback above,
+      // remaining parts of the response (including other properties, if any)
+ // will arrive when the whole response is ready and can be handled here.
+ }
+ ```
+
+* `StreamAsync` overloads:
+
+ ```csharp
+ // The property to stream is indicated using a lambda expression
+  Task<AiAnswer<TAnswer>> StreamAsync<TAnswer>
+  (Expression<Func<TAnswer, string>> streamPropertyPath,
+  Func<string, Task> streamedChunksCallback, CancellationToken token = default);
+ ```
+
+ ```csharp
+ // The property to stream is indicated as a string, using its name
+  Task<AiAnswer<TAnswer>> StreamAsync<TAnswer>
+  (string streamPropertyPath,
+  Func<string, Task> streamedChunksCallback, CancellationToken token = default);
+ ```
+
+ | Property | Type | Description |
+ |----------|------|-------------|
+  | streamPropertyPath | `Expression<Func<TAnswer, string>>` | A lambda expression that selects the property of the response object to stream. **The selected property must be a simple string** (not a JSON object or an array, for example). It is recommended that this be the first property defined in the response schema: the LLM processes properties in the order they are defined, so streaming the first property ensures that streaming to the user starts immediately even if the LLM takes time to process later properties. |
+  | streamPropertyPath | `string` | The name of the property in the response object to stream. **The selected property must be a simple string** (not a JSON object or an array, for example). It is recommended that this be the first property defined in the response schema: the LLM processes properties in the order they are defined, so streaming the first property ensures that streaming to the user starts immediately even if the LLM takes time to process later properties. |
+  | streamedChunksCallback | `Func<string, Task>` | A callback function that is invoked with each incoming chunk of the streamed property |
+ | token | `CancellationToken` | An optional token that can be used to cancel the streaming operation |
+
+ | Return value | Description |
+ |--------------|-------------|
+  | `Task<AiAnswer<TAnswer>>` | After streaming the specified property, the return value contains the final conversation result and status (e.g. "Done" or "ActionRequired"). |
+
+
+
+## Full example
+
+The agent's user in this example is a human experience manager.
+The agent helps its user reward employees: using query tools, it searches
+for orders sent to a certain country or (if the user prompts it "everywhere")
+to all countries, and finds the employee that made the largest profit.
+The agent then runs another query tool to find the employee's residence region
+by the employee's ID (fetched from the retrieved orders),
+and finds rewards suitable for the employee based on this region.
+Finally, it uses an action tool to store the employee's ID, profit, and reward
+suggestions in the `Performers` collection in the database, and returns the same
+details in its final response as well.
+
+```csharp
+public async Task createAndRunAiAgent_full()
+{
+ var store = new DocumentStore();
+
+ // Define connection string to OpenAI
+ var connectionString = new AiConnectionString
+ {
+ Name = "open-ai-cs",
+ ModelType = AiModelType.Chat,
+ OpenAiSettings = new OpenAiSettings(
+ apiKey: "your-api-key",
+ endpoint: "https://api.openai.com/v1",
+ // LLM model for text generation
+ model: "gpt-4.1")
+ };
+
+ // Deploy connection string to server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+
+ using var session = store.OpenAsyncSession();
+
+ // Start setting an agent configuration
+ var agent = new AiAgentConfiguration("reward-productive-employee", connectionString.Name,
+ @"You work for a human experience manager.
+ The manager uses your services to find which employee has made the largest profit and to suggest
+ a reward.
+        The manager provides you with the name of a country, or with the word ""everywhere"" to indicate
+ all countries.
+ Then you:
+ 1. use a query tool to load all the orders sent to the selected country,
+ or a query tool to load all orders sent to all countries.
+ 2. calculate which employee made the largest profit.
+ 3. use a query tool to learn in what general area this employee lives.
+ 4. find suitable vacations sites or other rewards based on the employee's residence area.
+ 5. use an action tool to store in the database the employee's ID, profit, and your reward suggestions.
+ When you're done, return these details in your answer to the user as well.");
+
+ // Set agent ID
+ agent.Identifier = "reward-productive-employee";
+
+ // Define LLM response object
+ agent.SampleObject = "{" +
+        "\"EmployeeID\": \"embed the employee's ID here\"," +
+ "\"Profit\": \"embed the profit made by the employee here\"," +
+ "\"SuggestedReward\": \"embed suggested rewards here\"" +
+ "}";
+
+ // Set agent parameters
+ agent.Parameters.Add(new AiAgentParameter(
+ "country", "A specific country that orders were shipped to, " +
+ "or \"everywhere\" to look for orders shipped to all countries"));
+
+ agent.Queries =
+ [
+ // Set a query tool to retrieve all orders sent everywhere
+ new AiAgentToolQuery
+ {
+ // Query tool name
+ Name = "retrieve-orders-sent-to-all-countries",
+
+ // Query tool description
+ Description = "a query tool that allows you to retrieve all orders sent to all countries.",
+
+ // Query tool RQL query
+ Query = "from Orders as O select O.Employee, O.Lines.Quantity",
+
+ // Sample parameters object
+ ParametersSampleObject = "{}"
+ },
+
+ // Set a query tool to retrieve all orders sent to a specific country
+ new AiAgentToolQuery
+ {
+ Name = "retrieve-orders-sent-to-a-specific-country",
+ Description =
+ "a query tool that allows you to retrieve all orders sent to a specific country",
+ Query =
+ "from Orders as O where O.ShipTo.Country == " +
+ "$country select O.Employee, O.Lines.Quantity",
+ ParametersSampleObject = "{}"
+ },
+
+ // Set a query tool to retrieve the performer's residence region details from the database
+ new AiAgentToolQuery
+ {
+ Name = "retrieve-performer-living-region",
+ Description =
+ "a query tool that allows you to retrieve an employee's country, city, and " +
+ "region, by the employee's ID",
+ Query = "from Employees as E where id() == $employeeId select E.Address.Country, " +
+ "E.Address.City, E.Address.Region",
+ ParametersSampleObject = "{" +
+ "\"employeeId\": \"embed the employee's ID here\"" +
+ "}"
+ }
+ ];
+
+ agent.Actions =
+ [
+ // Set an action tool to store the performer's details
+ new AiAgentToolAction
+ {
+ Name = "store-performer-details",
+ Description =
+ "an action tool that allows you to store the ID of the employee that made " +
+ "the largest profit, the profit, and your suggestions for a reward, in the database.",
+ ParametersSampleObject = "{" +
+ "\"suggestedReward\": \"embed your suggestions for a reward here\", " +
+                "\"employeeId\": \"embed the employee's ID here\", " +
+                "\"profit\": \"embed the employee's profit here\"" +
+ "}"
+ }
+ ];
+
+ // Set chat trimming configuration
+ AiAgentSummarizationByTokens summarization = new AiAgentSummarizationByTokens()
+ {
+        // Summarize old messages when the number of tokens stored in the conversation exceeds this limit
+ MaxTokensBeforeSummarization = 32768,
+ // Max number of tokens that the conversation is allowed to contain after summarization
+ MaxTokensAfterSummarization = 1024
+ };
+
+ agent.ChatTrimming = new AiAgentChatTrimmingConfiguration(summarization);
+
+    // Limit the number of times the LLM can request tools in response to a single user prompt
+ agent.MaxModelIterationsPerCall = 3;
+
+ var createResult = await store.AI.CreateAgentAsync(agent, new Performer
+ {
+ suggestedReward = "your suggestions for a reward",
+ employeeId = "the ID of the employee that made the largest profit",
+ profit = "the profit the employee made"
+ });
+
+ // Set chat ID, prefix, agent parameters.
+ // (specific country activates one query tool,"everywhere" activates another)
+ var chat = store.AI.Conversation(
+ createResult.Identifier,
+ "Performers/",
+ new AiConversationCreationOptions().AddParameter("country", "France"));
+
+ // Handle the action tool that the LLM uses to store the performer's details in the database
+ chat.Handle("store-performer-details", (Performer performer) =>
+ {
+ using (var session1 = store.OpenSession())
+ {
+ // store values in Performers collection in database
+ session1.Store(performer);
+ session1.SaveChanges();
+ }
+ return "done";
+ });
+
+ // Set user prompt and run chat
+ chat.SetUserPrompt("send a few suggestions to reward the employee that made the largest profit");
+
+ var LLMResponse = await chat.RunAsync(CancellationToken.None);
+
+ if (LLMResponse.Status == AiConversationResult.Done)
+ {
+ // The LLM successfully processed the user prompt and returned its response.
+ // The performer's ID, profit, and suggested rewards were stored in the Performers
+ // collection by the action tool, and are also returned in the final LLM response.
+ }
+}
+```
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio.mdx b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio.mdx
new file mode 100644
index 0000000000..f98cd48758
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio.mdx
@@ -0,0 +1,377 @@
+---
+title: "Creating AI agents: Studio"
+hide_table_of_contents: true
+sidebar_label: Studio
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Creating AI agents: Studio
+
+
+* In this article:
+ * [Create AI Agent](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#create-ai-agent)
+ * [Configure basic settings](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#configure-basic-settings)
+ * [Set agent parameters](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#set-agent-parameters)
+ * [Define agent tools](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#define-agent-tools)
+ * [Add query tools](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#add-query-tools)
+ * [Add action tools](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#add-action-tools)
+ * [Configure chat trimming](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#configure-chat-trimming)
+ * [Save and Run your agent](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#save-and-run-your-agent)
+ * [Start new chat](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#start-new-chat)
+ * [Agent interaction](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#agent-interaction)
+ * [Action tool dialog](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#action-tool-dialog)
+ * [Agent results](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#agent-results)
+ * [Test your agent](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#test-your-agent)
+ * [Runtime view and Test results](../../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#runtime-view-and-test-results)
+
+
+
+
+
+## Create AI Agent
+
+To create an AI agent, open **AI Hub > AI Agents** and click **Add new agent**.
+
+
+
+1. **AI Hub**
+ Click to open the [AI Hub view](../../../ai-integration/ai-tasks-list-view.mdx).
+ Use this view to handle AI connection strings and tasks, and to view task statistics.
+2. **AI Agents**
+ Click to open the AI Agents view.
+ Use this view to list, configure, or remove your agents.
+3. **Add new agent**
+ Click to add an AI agent.
+
+ The **Create AI Agent** dialog will open, allowing you to define and test your agent.
+
+ 
+
+   Use the buttons in the bottom bar to Cancel, Save, or Test your changes.
+
+4. **Filter by name**
+ When multiple agents are created, you can filter them by a string you enter here.
+
+5. **Defined agent**
+ After defining an agent, it is listed in this view, allowing you to run, edit, or remove the agent.
+
+
+
+## Configure basic settings
+
+
+
+1. **Agent name**
+ Enter a name for the agent.
+ E.g., **CustomerSupportAgent**
+
+2. **Identifier**
+ Enter a unique identifier for the agent,
+ or click **Regenerate** to create it automatically.
+
+3. **Connection String**
+
+ 
+
+ **Select** an existing [connection string](../../../ai-integration/connection-strings/connection-strings-overview.mdx)
+   that the agent will use to connect to your LLM of choice,
+   or click **Create a new AI connection string** to define a new one.
+ Your agent can use a local LLM like Ollama, or an external model like OpenAI.
+
+ 
+
+4. **System prompt**
+   Enter a prompt that defines the LLM's characteristics, such as its role and purpose.
+
+5. **Sample response object** and **Response JSON schema**
+ Define a response JSON object for the LLM reply, either as a sample object or as a formal schema.
+   - The response object guides the LLM in composing its replies, and can make them
+     easier for the client to parse.
+   - Defining a sample object is normally simpler.
+   - Behind the scenes, RavenDB translates the sample object to a JSON schema
+     before sending it to the LLM, but you can define the schema yourself if you prefer.
+ - After defining a sample object, you can open the schema tab and click the "View schema"
+ button to see the generated schema.
+ 
+
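+   For example, a sample response object can be as simple as an anonymous object whose
+   property values describe, in plain language, what the LLM should place in each field
+   (this sketch mirrors the client-API example from the overview article):
+
+   ```csharp
+   // Each property value tells the LLM what to return in that field.
+   var sampleResponseObject = new
+   {
+       employeeId = "the ID of the employee that made the largest profit",
+       profit = "the profit the employee made",
+       suggestedReward = "your suggestions for a reward"
+   };
+   ```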
+
+
+## Set agent parameters
+
+Define **agent parameters**.
+Once defined, an agent parameter can be included in the RQL queries of query tools.
+Values for agent parameters are provided by the client when a conversation is started.
+[Read more about parameters](../../../ai-integration/ai-agents/ai-agents_overview#query-parameters).
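+
+For example, a `country` agent parameter can be referenced as `$country` in a query tool's
+RQL query: `from "Orders" where ShipTo.Country == $country`.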
+
+
+
+1. **Add new parameter**
+ Click to add an agent parameter.
+
+2. **Name**
+ Enter agent parameter name.
+
+3. **Description**
+   Describe the parameter in plain language so the LLM can understand its purpose.
+
+4. **Remove parameter**
+ Remove a defined parameter from the list.
+
+
+
+## Define agent tools
+
+Define **Query** and **Action** agent tools.
+
+* The tools you define here can be freely used by the LLM.
+   * Query tools can trigger the agent to retrieve data from the database and return it to
+     the LLM.
+   * Action tools can trigger the client to perform actions such as removing a spam entry from
+     a comments section or adding a comment to an article.
+* The LLM has no direct access to the database or any other server property; all queries and
+  actions are performed through the agent.
+* [Find an AI agent usage flowchart here](../../../ai-integration/ai-agents/ai-agents_overview#ai-agent-usage-flowchart)
+
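+For reference, this is how a client handles an action tool through the client API
+(taken from the overview article's example):
+
+```csharp
+// Handle the action tool that the LLM uses to store a performer's details:
+// the agent invokes this handler whenever the LLM requests the action.
+chat.Handle("store-performer-details", (Performer performer) =>
+{
+    using (var session = store.OpenSession())
+    {
+        session.Store(performer);  // store the details provided by the LLM
+        session.SaveChanges();
+    }
+    return "done";                 // this result is returned to the LLM
+});
+```
+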
+
+
+1. **Query tools**
+ Click to add a new query tool.
+
+2. **Action tools**
+ Click to add a new action tool.
+
+### Add query tools:
+
+
+
+1. **Add new query tool**
+ Click to add a new query tool.
+
+2. **Remove**
+ Click to remove this tool.
+
+3. **Expand/Collapse tool**
+ Click to expand or collapse the tool's details.
+
+4. **Tool name**
+ Enter a name for the query tool.
+
+5. **Description**
+   Write a description that explains to the LLM, in natural language, what the attached query can be used for.
+ E.g., `apply this query when you need to retrieve the details of all the companies that reside in a certain country`
+
+6. **Allow model queries**
+ Enable to allow the LLM to trigger the execution of this query tool.
+ Disable to prevent the LLM from using this tool.
+
+   When disabled, the LLM will not be able to trigger this tool. However, if the tool is set as an [initial-context query](../../../ai-integration/ai-agents/ai-agents_overview#initial-context-queries), the agent will still be able to execute it when it is started.
+
+
+7. **Add to initial context**
+ Enable to set the query tool as an [initial-context query](../../../ai-integration/ai-agents/ai-agents_overview#initial-context-queries).
+   When enabled, the agent executes the query as soon as it starts a conversation with the LLM, without waiting for the LLM to invoke the tool, so that data relevant to the conversation is included in the initial context sent to the LLM.
+ Disable to prevent the agent from executing the query on startup.
+
+   An initial-context query is **not allowed** to use LLM parameters, since the LLM has no opportunity to fill the parameters with values before the query is executed.
+   The query **can** use agent parameters, whose values are provided by the client when the conversation is started.
+   For example, `from "Orders" where ShipTo.Country == $country` is a valid initial-context query when `country` is defined as an agent parameter.
+
+
+8. **Query**
+   Enter the query that the agent will run when the LLM invokes this tool.
+
+9. **Sample parameters object** and **Parameters JSON schema**
+ Set a schema (as a sample object or a formal JSON schema) that allows the LLM to fill query parameters with values.
+ [Read more about query parameters](../../../ai-integration/ai-agents/ai-agents_overview#query-parameters)
+
+### Add action tools:
+
+
+
+1. **Add new action tool**
+ Click to add a new action tool.
+
+2. **Remove**
+ Click to remove this tool.
+
+3. **Expand/Collapse tool**
+ Click to expand or collapse the tool's details.
+
+4. **Tool name**
+ Enter a name for the action tool.
+
+5. **Description**
+ Enter a description that explains to the LLM in natural language when this action tool should be applied.
+ E.g., `apply this action tool when you need to create a new summary document`
+
+6. **Sample parameters object** and **Parameters JSON schema**
+ Set a sample object or a JSON schema that the LLM can populate when it invokes the action tool. The agent will pass this information to the client to guide it through the action it is requested to perform.
+
+   If you define both a sample parameters object and a schema, only the schema will be used.
+
+
+
+## Configure chat trimming
+
+LLMs have no memory of prior interactions.
+To allow a continuous conversation, each time the agent sends a new prompt or request to the LLM, it includes the entire conversation up to that point.
+To minimize the size of these messages, you can set the agent to summarize conversations (a conceptual sketch of the two thresholds appears after the list below).
+
+
+
+1. **Summarize chat**
+   Use this option to limit the size of the conversation history. If the history exceeds the limit set below, it is summarized before it is sent to the LLM.
+
+2. **Max tokens Before summarization**
+   If the conversation's total token count exceeds the limit you set here, the conversation will be summarized.
+
+3. **Max tokens After summarization**
+ Set the maximum number of tokens that will be left in the conversation after it is summarized.
+ Messages exceeding this limit will be removed, starting with the oldest.
+
+4. **History**
+ * **Enable history**
+ When history is enabled, the conversation sent to the LLM will be summarized, but a copy of the original conversation will be kept in a dedicated document in the `@conversations-history` collection.
+ * **Set history expiration**
+ When this option is enabled, conversations will be deleted from the
+ `@conversations-history` collection once their age exceeds the period
+ you set.
+
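+The following is a conceptual sketch of the two summarization thresholds in plain C#
+(not RavenDB API; `CountTokens` is an illustrative stand-in for real token counting):
+
+```csharp
+// Conceptual sketch only - not RavenDB's implementation.
+// Rough illustrative token estimate (~4 characters per token).
+static int CountTokens(string message) => message.Length / 4;
+
+// If the history's total token count exceeds maxTokensBefore, it is summarized,
+// and the oldest messages are dropped until it fits within maxTokensAfter.
+static List<string> TrimHistory(List<string> messages,
+                                int maxTokensBefore, int maxTokensAfter)
+{
+    if (messages.Sum(CountTokens) <= maxTokensBefore)
+        return messages;         // under the limit - sent as-is
+
+    var trimmed = new List<string>(messages);
+    while (trimmed.Count > 1 && trimmed.Sum(CountTokens) > maxTokensAfter)
+        trimmed.RemoveAt(0);     // remove the oldest messages first
+
+    // In RavenDB, a summary of the conversation stands in for the removed part;
+    // this sketch only illustrates the size bookkeeping.
+    return trimmed;
+}
+```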
+
+
+## Save and Run your agent
+
+When you're done configuring your agent, save it using the **Save** button at the bottom.
+
+
+
+You will find your agent in the main **AI Agents** view, where you can run or edit it.
+
+
+
+1. **Start new chat**
+ Click to start your agent.
+
+2. **Edit agent**
+ Click to edit the agent.
+
+### Start new chat:
+
+Starting a new chat will open the chat window, where you can provide values
+for the parameters you defined for this agent and enter a user prompt that explains
+to the agent what you expect from this session.
+
+
+
+1. **Conversation ID or prefix**
+   - Entering **a prefix** (e.g. `Chats/`) will start a new conversation, with the prefix preceding an automatically created conversation ID.
+   - Entering **the ID of a conversation that doesn't exist yet** will also start a new conversation.
+   - Entering **the ID of an existing conversation** will send the entire conversation to the LLM and allow you to continue where you left off.
+
+   (A client-API sketch of these options appears after this list.)
+
+2. **Set expiration**
+ Enable this option and set an expiration period to automatically delete conversations
+   from the `@conversations` collection when their age exceeds the set period.
+
+3. **Agent parameters**
+ Enter a value for each parameter defined in the agent configuration.
+   These values will be embedded in the RQL queries of query tools where you
+   included agent parameters.
+   E.g., if you enter `France` here as the value for `Country`,
+ a query tool's `from "Orders" where ShipTo.Country == $country` RQL query
+ will be executed as `from "Orders" where ShipTo.Country == "France"`.
+
+4. **User prompt**
+ Use the user prompt to explain to the agent, in natural language, what
+ this session is about.
+
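+For reference, the same ID/prefix behavior applies when a conversation is opened from the
+client API (a minimal sketch based on the overview article's example; `agentIdentifier`
+stands for your agent's identifier):
+
+```csharp
+// Passing a prefix such as "Chats/" starts a new conversation;
+// RavenDB appends an automatically created conversation ID to it.
+// Passing the ID of an existing conversation instead loads that
+// conversation and lets you continue where you left off.
+var chat = store.AI.Conversation(
+    agentIdentifier,
+    "Chats/",
+    new AiConversationCreationOptions().AddParameter("country", "France"));
+```
+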
+### Agent interaction:
+
+Running the agent displays its components and their interactions.
+
+Agent parameters and their values:
+
+
+The system prompt set for the LLM and the user prompt:
+
+
+The query tools and their activity:
+
+
+You can view the raw data of the agent's activity in JSON form as well:
+
+
+### Action tool dialog:
+
+If the agent runs action tools, a dialog will show you the information
+provided by the LLM when it requests the action, and another dialog will
+invite you to enter the results when you finish performing it.
+
+
+
+### Agent results:
+
+Finally, when the AI model finishes its work, you will be able to see its response.
+As with all other dialog boxes, you can expand the view to see the full content or minimize it to see it in context.
+
+
+
+
+
+## Test your agent
+
+You can test your agent while creating or editing it, to examine its configuration and operability before you deploy it. The test interface resembles the one you see when you run your agent normally via Studio, but conversations are not kept in the `@conversations` or `@conversations-history` collections.
+
+To test your agent, click **Test** at the bottom of the agent configuration view.
+
+
+
+
+
+
+1. **New Chat**
+   Click to start a new chat.
+2. **Close**
+ Click to return to the AI Agents configuration view.
+3. **Enter parameter value**
+ Enter a value for each parameter defined in the agent configuration.
+   These values will replace the agent parameters embedded in the query
+   or action tools that use them.
+4. **Agent prompt**
+ Explain to the agent in natural language what this session is about.
+5. **Send prompt**
+   Click to pass your parameter values and user prompt to the agent and run the test.
+ You can keep sending prompts to the agent and receiving its replies in
+ a continuous conversation.
+
+### Runtime view and Test results:
+
+You will see the components that take part in the agent's run and be able
+to enter and send requested information for action tools. Each tool can be
+minimized to see it in context or expanded to view the data it carries.
+
+
+
+When the LLM finishes processing, you will see its response.
+
+
+
+You can expand the dialog or copy the content to see the response in detail.
+
+
+{`\{
+ "EmployeeID": "employees/1-A",
+ "EmployeeProfit": "1760",
+ "SuggestedRewards": "The employee lives in Redmond, WA, USA. For a special reward, consider a weekend getaway to the Pacific Northwest's scenic sites such as a stay at a luxury resort in Seattle or a relaxing wine tasting tour in Woodinville. Alternatively, you could offer gift cards for outdoor excursions in the Cascade Mountains or tickets to major cultural events in the Seattle area."
+\}
+`}
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/ai-integration_start.mdx b/versioned_docs/version-7.1/ai-integration/ai-integration_start.mdx
new file mode 100644
index 0000000000..9b249152b1
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/ai-integration_start.mdx
@@ -0,0 +1,93 @@
+---
+title: "AI Integration"
+hide_table_of_contents: true
+sidebar_label: "Start"
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+import CardWithImage from "@site/src/components/Common/CardWithImage";
+import CardWithImageHorizontal from "@site/src/components/Common/CardWithImageHorizontal";
+import ColGrid from "@site/src/components/ColGrid";
+
+import buildVsBuyStartImage from "./assets/ai-start_ai-agents_build-vs-buy.png";
+import vectorSearchIntroImage from "./assets/ai-start_vector-search_intro.png";
+import practicalLookAiAgentsImage from "./assets/practical-look-ai-agents-article-image.webp";
+
+import ayendeBlogImage from "@site/static/img/from-ayende-com.webp";
+import webinarThumbnailPlaceholder from "@site/static/img/webinar.webp";
+import discordThumbnailPlaceholder from "@site/static/img/discord.webp";
+
+# AI Integration
+Ship AI-powered features faster with RavenDB’s native tools.
+
+### Native AI features that create intelligent applications
+RavenDB is equipped with a set of powerful native AI features that can
+be used independently or in conjunction with each other, allowing you to easily integrate advanced AI capabilities into your applications.
+These features include [AI agents](../ai-integration/ai-integration_start#ai-agents), [GenAI tasks](../ai-integration/ai-integration_start#genai-tasks), [Embeddings generation](../ai-integration/ai-integration_start#embeddings-generation), and [Vector search](../ai-integration/ai-integration_start#vector-search).
+
+### Use cases
+RavenDB AI features help you ship any AI-related scenario quickly, including:
+* **Conversational intelligence** - Natural-language chatbots, assistants, and interactive workflows.
+* **Automated content enrichment** - Summarization, translation, classification, and document enhancement.
+* **Semantic representation** - Creating vector representations for text, images, and other data types.
+* **Similarity-based discovery** - Finding related items, aggregation, and context-aware retrieval.
+* **Personalization & recommendations** - Tailoring suggestions, feeds, and user experiences.
+* **Content moderation & compliance** - Automatically handling sensitive, inappropriate, or non-compliant content.
+* **Knowledge management & Q&A** - Asking questions over policies, wikis, and documents; retrieving answers and citations.
+
+#### Learn more: In-depth AI features articles
+
+
+
+
+
+### AI agents
+AI agents are conversational proxy components that reside on the server and autonomously handle client requests using an AI model. Instead of spending time integrating AI capabilities into your application yourself, you can rapidly configure AI agents using Studio or the client API. Agents can securely read from the database and request actions from the client on behalf of the AI model, infusing intelligence into the workflow. Whether you need chatbots, automated reporting, or intelligent data processing, you get production-ready AI features immediately, without the integration overhead.
+
+
+
+
+
+
+### GenAI tasks
+GenAI tasks are configurable [ongoing operations](../studio/database/tasks/ongoing-tasks/general-info) that process your documents systematically in the background using an AI model. Instead of building custom AI integration pipelines yourself, you can easily create tasks that weave AI capabilities into your data flow. They can enrich documents with AI-generated content, validate and categorize data, translate documents, or execute countless other automated workflows that leverage AI capabilities.
+
+
+
+
+
+
+### Embeddings generation
+Embeddings generation tasks transform your content into semantic vectors that enable intelligent similarity-based searches. Instead of building complex search infrastructure, you can utilize native tasks that seamlessly embed vector capabilities into your data, enabling intelligent search by meaning and context.
+
+
+
+
+
+
+### Vector search
+Vector search enables intelligent similarity-based discovery using embeddings rather than exact matching. Instead of developing custom similarity algorithms yourself, you can employ native vector operations for diverse applications. Whether you need to categorize content, find similar items, or automate recommendations, vector search delivers intelligent matching capabilities that understand meaning and context.
+
+
+
+
+
+
+### Related live sessions & videos
+Watch our broadcasts to see RavenDB's AI features in action and learn practical implementation techniques.
+
+
+
+
+
+
+
+### Deep dives, content & resources
+Find additional resources to enhance your knowledge and skills.
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/ai-tasks-list-view.mdx b/versioned_docs/version-7.1/ai-integration/ai-tasks-list-view.mdx
new file mode 100644
index 0000000000..63603c7664
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/ai-tasks-list-view.mdx
@@ -0,0 +1,59 @@
+---
+title: "AI Tasks - List View"
+hide_table_of_contents: true
+sidebar_label: AI Tasks - List View
+sidebar_position: 5
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# AI Tasks - List View
+
+
+
+* RavenDB supports the following AI tasks:
+ * [Embeddings generation task](../ai-integration/generating-embeddings/overview.mdx)
+ * [Gen AI task](../ai-integration/gen-ai-integration/gen-ai-overview.mdx)
+
+* AI tasks are part of RavenDB's ongoing tasks.
+ Learn more in [Ongoing Tasks - Overview](../studio/database/tasks/ongoing-tasks/general-info.mdx).
+
+* In the **AI Tasks - List view**, you can manage RavenDB's AI tasks -
+ create new tasks, edit existing ones, or delete them as needed.
+
+* In this article:
+ * [AI Tasks - list view](../ai-integration/ai-tasks-list-view.mdx#ai-tasks---list-view)
+
+
+
+## AI Tasks - list view
+
+
+
+1. Go to **AI Hub > AI Tasks**.
+
+2. **Add AI Task**: Click to create a new AI task.
+
+3. **Task name**: The name of an existing AI task.
+
+4. **Task type**: The type of task: _Embeddings Generation_ or _Gen AI_.
+
+5. **Assigned node**: The node in the database group that is responsible for running the task.
+
+6. **Enable/Disable**: Click to enable or disable the task.
+
+7. **Details**: Click to view detailed information about the task.
+
+8. **Edit**: Click to modify the task.
+
+9. **Delete**: Click to remove the task.
+
+10. **Identifier**: The string identifier defined for the task.
+ **Connection string**: The name of the connection string used by the task.
+
+11. **Task status**: Displays the task’s current state and progress.
diff --git a/versioned_docs/version-7.1/ai-integration/assets/ai-start_ai-agents_build-vs-buy.png b/versioned_docs/version-7.1/ai-integration/assets/ai-start_ai-agents_build-vs-buy.png
new file mode 100644
index 0000000000..990075161e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/assets/ai-start_ai-agents_build-vs-buy.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/assets/ai-start_vector-search_intro.png b/versioned_docs/version-7.1/ai-integration/assets/ai-start_vector-search_intro.png
new file mode 100644
index 0000000000..55a9e11828
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/assets/ai-start_vector-search_intro.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/assets/ai-tasks-list-view.png b/versioned_docs/version-7.1/ai-integration/assets/ai-tasks-list-view.png
new file mode 100644
index 0000000000..23356aa2b3
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/assets/ai-tasks-list-view.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/assets/practical-look-ai-agents-article-image.webp b/versioned_docs/version-7.1/ai-integration/assets/practical-look-ai-agents-article-image.webp
new file mode 100644
index 0000000000..905ca8f6f3
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/assets/practical-look-ai-agents-article-image.webp differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/_category_.json b/versioned_docs/version-7.1/ai-integration/connection-strings/_category_.json
new file mode 100644
index 0000000000..2fd9012301
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 4,
+ "label": "Connection Strings"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/azure-open-ai-1.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/azure-open-ai-1.png
new file mode 100644
index 0000000000..3d8836044c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/azure-open-ai-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/azure-open-ai-2.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/azure-open-ai-2.png
new file mode 100644
index 0000000000..bc603828aa
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/azure-open-ai-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/connection-strings-view.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/connection-strings-view.png
new file mode 100644
index 0000000000..a71909ddca
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/connection-strings-view.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/create-connection-string.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/create-connection-string.png
new file mode 100644
index 0000000000..0aafce2d08
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/create-connection-string.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/embedded.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/embedded.png
new file mode 100644
index 0000000000..c060cfbf1d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/embedded.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/google-ai.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/google-ai.png
new file mode 100644
index 0000000000..3bd83f2e63
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/google-ai.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/hugging-face.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/hugging-face.png
new file mode 100644
index 0000000000..ac263c8ad6
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/hugging-face.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/mistral-ai.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/mistral-ai.png
new file mode 100644
index 0000000000..cd7f4b6d95
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/mistral-ai.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/ollama-1.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/ollama-1.png
new file mode 100644
index 0000000000..9dd0dcddab
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/ollama-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/ollama-2.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/ollama-2.png
new file mode 100644
index 0000000000..04f0524aac
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/ollama-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/open-ai-1.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/open-ai-1.png
new file mode 100644
index 0000000000..6b0f1d1a4d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/open-ai-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/open-ai-2.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/open-ai-2.png
new file mode 100644
index 0000000000..182c61ca8f
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/open-ai-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/assets/vertex-ai.png b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/vertex-ai.png
new file mode 100644
index 0000000000..28b5c515dc
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/connection-strings/assets/vertex-ai.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/azure-open-ai.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/azure-open-ai.mdx
new file mode 100644
index 0000000000..a620c7b91d
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/azure-open-ai.mdx
@@ -0,0 +1,33 @@
+---
+title: "Connection String to Azure OpenAI"
+hide_table_of_contents: true
+sidebar_label: Azure OpenAI
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import AzureOpenAiCsharp from './content/_azure-open-ai-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/connection-strings-overview.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/connection-strings-overview.mdx
new file mode 100644
index 0000000000..48104f4d14
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/connection-strings-overview.mdx
@@ -0,0 +1,121 @@
+---
+title: "AI Connection Strings - Overview"
+hide_table_of_contents: true
+sidebar_label: Overview
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# AI Connection Strings - Overview
+
+
+
+* AI connection strings define how RavenDB connects to external AI models.
+ Each connection string specifies the provider and the model to use.
+ The model can be either a chat model or a text embedding model.
+
+* These connection strings are then used by AI-powered features in RavenDB, such as:
+ * [Embeddings Generation Tasks](../../ai-integration/generating-embeddings/overview.mdx) -
+ use **text embedding models** to generate vector embeddings from document content for vector search.
+ * [Gen AI Tasks](../../ai-integration/gen-ai-integration/gen-ai-overview) and [AI Agents](../../ai-integration/ai-agents/ai-agents_overview.mdx) -
+ interact with **chat models** for reasoning, summarization, or conversational workflows.
+
+* RavenDB supports connecting to the following external providers:
+ [OpenAI & OpenAI compatible providers](../../ai-integration/connection-strings/open-ai.mdx),
+ [Azure OpenAI](../../ai-integration/connection-strings/azure-open-ai.mdx),
+ [Google AI](../../ai-integration/connection-strings/google-ai.mdx),
+ [Vertex AI](../../ai-integration/connection-strings/vertex-ai.mdx),
+ [Ollama](../../ai-integration/connection-strings/ollama.mdx),
+ [Hugging Face](../../ai-integration/connection-strings/hugging-face.mdx),
+ and [Mistral AI](../../ai-integration/connection-strings/mistral-ai.mdx),
+ or to RavenDB’s [embedded model (_bge-micro-v2_)](../../ai-integration/connection-strings/embedded.mdx).
+
+* While each task can have only one connection string,
+ you can define multiple connection strings in your database to support different providers or configurations.
+ A single connection string can also be reused across multiple tasks in the database.
+
+* The AI connection strings can be created from:
+ * The **AI Connection Strings view in the Studio** -
+ where you can create, edit, and delete connection strings that are not in use.
+ * The **Client API** -
+ examples are available in the dedicated articles for each provider.
+
+---
+
+* In this article:
+ * [The AI Connection Strings view](../../ai-integration/connection-strings/connection-strings-overview.mdx#the-ai-connection-strings-view)
+ * [Creating an AI connection string (from the Studio)](../../ai-integration/connection-strings/connection-strings-overview.mdx#creating-an-ai-connection-string-from-the-studio)
+
+
+
+## The AI Connection Strings view
+
+
+
+1. Go to the **AI Hub** menu.
+
+2. Open the **AI Connection Strings** view.
+
+3. Click **"Add new"** to create a new connection string.
+
+4. View the list of all AI connection strings that have been defined.
+
+5. Edit or delete a connection string.
+ Only connection strings that are not in use by a task can be deleted.
+
+## Creating an AI connection string (from the Studio)
+
+
+
+
+
+1. **Name**
+ Enter a unique name for the connection string.
+
+2. **Identifier**
+ Enter a unique identifier for the connection string.
+ Each AI connection string in the database must have a distinct identifier.
+
+ If not specified, or when clicking the "Regenerate" button,
+ RavenDB automatically generates the identifier based on the connection string name. For example:
+ * If the connection string name is: _"My connection string to Google AI"_
+ * The generated identifier will be: _"my-connection-string-to-google-ai"_
+
+   Allowed characters: only lowercase letters (a-z), numbers (0-9), and hyphens (-).
+   (A conceptual sketch of this generation rule appears at the end of this article.)
+   For example, see how this identifier is used in the [embeddings cache collection](../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-cache-collection).
+
+3. **Regenerate**
+ Click "Regenerate" to automatically create an identifier based on the connection string name.
+
+4. **Model type**
+ Select the type of model you want to interact with:
+ * **Chat model**
+ Select this type to use a conversational model for content generation and dialogue.
+ * **Text embedding model**
+ Select this type to generate vector embeddings from your document content for vector search.
+
+5. **Connector**
+ Select an AI provider from the dropdown menu.
+ This opens a dialog where you can configure the connection details for the selected provider.
+
+ The list of available providers is filtered based on the selected model type.
+ (Some providers are currently supported in RavenDB only for text embedding models).
+
+ Configuration details for each provider are explained in the following articles:
+ * [Azure Open AI](../../ai-integration/connection-strings/azure-open-ai.mdx)
+ * [Google AI](../../ai-integration/connection-strings/google-ai.mdx) (_embeddings only_)
+ * [Hugging Face](../../ai-integration/connection-strings/hugging-face.mdx) (_embeddings only_)
+ * [Ollama](../../ai-integration/connection-strings/ollama.mdx)
+ * [OpenAI](../../ai-integration/connection-strings/open-ai.mdx)
+ * [Mistral AI](../../ai-integration/connection-strings/mistral-ai.mdx) (_embeddings only_)
+ * [Vertex AI](../../ai-integration/connection-strings/vertex-ai.mdx) (_embeddings only_)
+ * [Embedded model (bge-micro-v2)](../../ai-integration/connection-strings/embedded.mdx) (_embeddings only_)
+
+6. Once you complete all configurations for the selected provider in the dialog,
+ save the connection string definition.
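+
+As referenced in step 2 above, here is a conceptual sketch of the identifier generation rule
+(illustrative only - not RavenDB's actual implementation):
+
+```csharp
+// Illustrative approximation of the rule described in step 2:
+// only lowercase letters (a-z), numbers (0-9), and hyphens (-).
+static string GenerateIdentifier(string connectionStringName)
+{
+    var slug = connectionStringName.ToLowerInvariant();
+    slug = System.Text.RegularExpressions.Regex.Replace(slug, "[^a-z0-9]+", "-");
+    return slug.Trim('-');
+    // "My connection string to Google AI" => "my-connection-string-to-google-ai"
+}
+```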
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/content/_azure-open-ai-csharp.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_azure-open-ai-csharp.mdx
new file mode 100644
index 0000000000..5f3ff1627b
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_azure-open-ai-csharp.mdx
@@ -0,0 +1,206 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to define a connection string to the [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service),
+ enabling RavenDB to use Azure OpenAI models for [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx),
+ [Gen AI tasks](../../../ai-integration/gen-ai-integration/gen-ai-overview.mdx), and [AI agents](../../../ai-integration/ai-agents/ai-agents_overview.mdx).
+
+* In this article:
+ * [Define the connection string - from the Studio](../../../ai-integration/connection-strings/azure-open-ai.mdx#define-the-connection-string---from-the-studio)
+ * [Configuring a text embedding model](../../../ai-integration/connection-strings/azure-open-ai.mdx#configuring-a-text-embedding-model)
+ * [Configuring a chat model](../../../ai-integration/connection-strings/azure-open-ai.mdx#configuring-a-chat-model)
+ * [Define the connection string - from the Client API](../../../ai-integration/connection-strings/azure-open-ai.mdx#define-the-connection-string---from-the-client-api)
+ * [Syntax](../../../ai-integration/connection-strings/azure-open-ai.mdx#syntax)
+
+
+
+## Define the connection string - from the Studio
+
+### Configuring a text embedding model
+
+
+
+1. **Name**
+ Enter a name for this connection string.
+
+2. **Identifier** (optional)
+ Enter an identifier for this connection string.
+   Learn more about the identifier in the [connection string identifier](../../../ai-integration/connection-strings/connection-strings-overview.mdx#the-connection-string-identifier) section.
+
+3. **Model Type**
+ Select "Text Embeddings".
+
+4. **Connector**
+ Select **Azure OpenAI** from the dropdown menu.
+
+5. **API key**
+ Enter the API key used to authenticate requests to the Azure OpenAI service.
+
+6. **Endpoint**
+ Enter the base URL of your Azure OpenAI resource.
+
+7. **Model**
+   Select an Azure OpenAI text embedding model from the dropdown list, or enter a new one.
+
+8. **Deployment name**
+ Specify the unique identifier assigned to your model deployment in your Azure environment.
+
+9. **Dimensions** (optional)
+ * Specify the number of dimensions for the output embeddings.
+ Supported only by _text-embedding-3_ and later models.
+ * If not specified, the model's default dimensionality is used.
+
+10. **Max concurrent query batches** (optional)
+ * When making vector search queries, the content of the search terms must also be converted to embeddings to compare them against the stored vectors.
+ Requests to generate such query embeddings via the AI provider are sent in batches.
+ * This parameter defines the maximum number of these batches that can be processed concurrently.
+ You can set a default value using the [Ai.Embeddings.MaxConcurrentBatches](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxconcurrentbatches) configuration key.
+
+11. Click **Test Connection** to confirm the connection string is set up correctly.
+
+12. Click **Save** to store the connection string or **Cancel** to discard changes.
+
+### Configuring a chat model
+
+* When configuring a chat model, the UI displays the same base fields as those used for [text embedding models](../../../ai-integration/connection-strings/azure-open-ai.mdx#configuring-a-text-embedding-model),
+ including the connection string _Name_, optional _Identifier_, _API Key_, _Endpoint_, _Deployment Name_, and _Model_ name.
+
+* One additional setting is specific to chat models: _Temperature_.
+
+
+
+1. **Model Type**
+ Select "Chat".
+
+2. **Model**
+ Enter the name of the Azure OpenAI model to use for chat completions.
+
+3. **Temperature** (optional)
+ The temperature setting controls the randomness and creativity of the model’s output.
+ Valid values typically range from `0.0` to `2.0`:
+ * Higher values (e.g., `1.0` or above) produce more diverse and creative responses.
+ * Lower values (e.g., `0.2`) result in more focused, consistent, and deterministic output.
+ * If not explicitly set, Azure OpenAI uses a default temperature of `1.0`.
+ See [Azure OpenAI chat completions parameters](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/reference#request-body-2).
+
+---
+
+## Define the connection string - from the Client API
+
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Azure OpenAI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string Name & Identifier
+ Name = "ConnectionStringToAzureOpenAI",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.TextEmbeddings,
+
+ // Azure OpenAI connection settings
+ AzureOpenAiSettings = new AzureOpenAiSettings
+ {
+ ApiKey = "your-api-key",
+ Endpoint = "https://your-resource-name.openai.azure.com",
+
+ // Name of text embedding model to use
+ Model = "text-embedding-3-small",
+
+ DeploymentName = "your-deployment-name",
+
+ // Optionally, override the default maximum number of query embedding batches
+ // that can be processed concurrently
+ EmbeddingsMaxConcurrentBatches = 10
+ }
+ };
+
+ // Deploy the connection string to the server
+ var putConnectionStringOp = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(putConnectionStringOp);
+}
+```
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Azure OpenAI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string Name & Identifier
+ Name = "ConnectionStringToAzureOpenAI",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.Chat,
+
+ // Azure OpenAI connection settings
+ AzureOpenAiSettings = new AzureOpenAiSettings
+ {
+ ApiKey = "your-api-key",
+ Endpoint = "https://your-resource-name.openai.azure.com",
+
+ // Name of chat model to use
+ Model = "gpt-4o-mini",
+
+ DeploymentName = "your-deployment-name",
+
+ // Optionally, set the model's temperature
+ Temperature = 0.4
+ }
+ };
+
+ // Deploy the connection string to the server
+ var putConnectionStringOp = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(putConnectionStringOp);
+}
+```
+
+
+
+## Syntax
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public string Identifier { get; set; }
+ public AiModelType ModelType { get; set; }
+ public AzureOpenAiSettings AzureOpenAiSettings { get; set; }
+}
+
+public class AzureOpenAiSettings : AbstractAiSettings
+{
+ public string ApiKey { get; set; }
+ public string Endpoint { get; set; }
+ public string Model { get; set; }
+ public string DeploymentName { get; set; }
+
+ // Relevant only for text embedding models:
+ // Specifies the number of dimensions in the generated embedding vectors.
+ public int? Dimensions { get; set; }
+
+ // Relevant only for chat models:
+ // Controls the randomness and creativity of the model’s output.
+ // Higher values (e.g., 1.0 or above) produce more diverse and creative responses.
+ // Lower values (e.g., 0.2) result in more focused and deterministic output.
+ // If set to 'null', the temperature is not sent and the model's default will be used.
+ public double? Temperature { get; set; }
+}
+
+public class AbstractAiSettings
+{
+ public int? EmbeddingsMaxConcurrentBatches { get; set; }
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/content/_embedded-csharp.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_embedded-csharp.mdx
new file mode 100644
index 0000000000..41a358f906
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_embedded-csharp.mdx
@@ -0,0 +1,100 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to define a connection string to the [bge-micro-v2](https://huggingface.co/TaylorAI/bge-micro-v2) model.
+ This model, designed exclusively for embeddings generation, is embedded within RavenDB, enabling RavenDB to seamlessly handle its
+ [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx) without requiring an external AI service.
+
+* Running the model locally consumes processor resources and will impact RavenDB's overall performance,
+ depending on your workload and usage patterns.
+
+* In this article:
+ * [Define the connection string - from the Studio](../../../ai-integration/connection-strings/embedded.mdx#define-the-connection-string---from-the-studio)
+ * [Define the connection string - from the Client API](../../../ai-integration/connection-strings/embedded.mdx#define-the-connection-string---from-the-client-api)
+ * [Syntax](../../../ai-integration/connection-strings/embedded.mdx#syntax)
+
+
+
+## Define the connection string - from the Studio
+
+
+
+1. **Name**
+ Enter a name for this connection string.
+
+2. **Identifier** (optional)
+ Learn more about the identifier in the [connection string identifier](../../../ai-integration/connection-strings/connection-strings-overview.mdx#the-connection-string-identifier) section.
+
+3. **Model Type**
+ Select "Text Embeddings".
+
+4. **Connector**
+ Select **Embedded (bge-micro-v2)** from the dropdown menu.
+
+5. **Max concurrent query batches** (optional)
+ * When making vector search queries, the content of the search terms must also be converted to embeddings to compare them against the stored vectors.
+ Requests to generate such query embeddings via the AI provider are sent in batches.
+ * This parameter defines the maximum number of these batches that can be processed concurrently.
+ You can set a default value using the [Ai.Embeddings.MaxConcurrentBatches](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxconcurrentbatches) configuration key.
+
+6. Click **Save** to store the connection string or **Cancel** to discard changes.
+
+## Define the connection string - from the Client API
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to the embedded model
+ var connectionString = new AiConnectionString
+ {
+ // Connection string name & identifier
+ Name = "ConnectionStringToEmbedded",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.TextEmbeddings,
+
+ // Embedded model settings
+ // No user configuration is required for the embedded model,
+ // as it uses predefined values managed internally by RavenDB.
+ EmbeddedSettings = new EmbeddedSettings()
+ };
+
+ // Optionally, override the default maximum number of query embedding batches
+ // that can be processed concurrently
+ connectionString.EmbeddedSettings.EmbeddingsMaxConcurrentBatches = 10;
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+}
+```
+
+
+## Syntax
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public string Identifier { get; set; }
+ public AiModelType ModelType { get; set; }
+ public EmbeddedSettings EmbeddedSettings { get; set; }
+}
+
+public class EmbeddedSettings : AbstractAiSettings
+{
+}
+
+public class AbstractAiSettings
+{
+ public int? EmbeddingsMaxConcurrentBatches { get; set; }
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/content/_google-ai-csharp.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_google-ai-csharp.mdx
new file mode 100644
index 0000000000..4c7ea24f46
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_google-ai-csharp.mdx
@@ -0,0 +1,130 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to define a connection string to [Google AI](https://ai.google.dev/gemini-api/docs/embeddings),
+ enabling RavenDB to seamlessly integrate its [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx) with Google's AI services.
+
+* This configuration supports **Google AI embeddings** only.
+ It is not compatible with Vertex AI endpoints or credentials.
+
+* RavenDB currently supports only text embeddings with Google AI.
+ Chat models are not supported through this integration.
+
+* In this article:
+ * [Define the connection string - from the Studio](../../../ai-integration/connection-strings/google-ai.mdx#define-the-connection-string---from-the-studio)
+ * [Define the connection string - from the Client API](../../../ai-integration/connection-strings/google-ai.mdx#define-the-connection-string---from-the-client-api)
+ * [Syntax](../../../ai-integration/connection-strings/google-ai.mdx#syntax)
+
+
+
+## Define the connection string - from the Studio
+
+
+
+1. **Name**
+ Enter a name for this connection string.
+
+2. **Identifier** (optional)
+ Enter an identifier for this connection string.
+ Learn more about the identifier in the [connection string identifier](../../../ai-integration/connection-strings/connection-strings-overview.mdx#the-connection-string-identifier) section.
+
+3. **Model Type**
+ Select "Text Embeddings".
+
+4. **Connector**
+ Select **Google AI** from the dropdown menu.
+
+5. **AI Version** (optional)
+ * Select the Google AI API version to use.
+ * If not specified, `V1_Beta` is used. Learn more in [API versions explained](https://ai.google.dev/gemini-api/docs/api-versions).
+
+6. **API key**
+ Enter the API key used to authenticate requests to Google's AI services.
+
+7. **Model**
+ Select or enter the Google AI text embedding model to use.
+
+8. **Dimensions** (optional)
+ * Specify the number of dimensions for the output embeddings.
+ * If not specified, the model's default dimensionality is used.
+
+9. **Max concurrent query batches** (optional)
+ * When making vector search queries, the content of the search terms must also be converted to embeddings to compare them against the stored vectors.
+ Requests to generate such query embeddings via the AI provider are sent in batches.
+ * This parameter defines the maximum number of these batches that can be processed concurrently.
+ You can set a default value using the [Ai.Embeddings.MaxConcurrentBatches](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxconcurrentbatches) configuration key.
+
+10. Click **Test Connection** to confirm the connection string is set up correctly.
+
+11. Click **Save** to store the connection string or **Cancel** to discard changes.
+
+## Define the connection string - from the Client API
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Google AI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string name & identifier
+ Name = "ConnectionStringToGoogleAI",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.TextEmbeddings,
+
+ // Google AI connection settings
+ GoogleSettings = new GoogleSettings(
+ apiKey: "your-api-key",
+ model: "text-embedding-004",
+ aiVersion: GoogleAIVersion.V1)
+ };
+
+ // Optionally, override the default maximum number of query embedding batches
+ // that can be processed concurrently
+ connectionString.GoogleSettings.EmbeddingsMaxConcurrentBatches = 10;
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+}
+```
+
+
+## Syntax
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public string Identifier { get; set; }
+ public AiModelType ModelType { get; set; }
+ public GoogleSettings GoogleSettings { get; set; }
+}
+
+public class GoogleSettings : AbstractAiSettings
+{
+ public string ApiKey { get; set; }
+ public string Model { get; set; }
+ public GoogleAIVersion? AiVersion { get; set; }
+ public int? Dimensions { get; set; }
+}
+
+public enum GoogleAIVersion
+{
+ V1, // Represents the "v1" version of the Google AI API
+ V1_Beta // Represents the "v1beta" version of the Google AI API
+}
+
+public class AbstractAiSettings
+{
+ public int? EmbeddingsMaxConcurrentBatches { get; set; }
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/content/_hugging-face-csharp.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_hugging-face-csharp.mdx
new file mode 100644
index 0000000000..7a9db2cd46
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_hugging-face-csharp.mdx
@@ -0,0 +1,116 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to define a connection string to [Hugging Face's text embedding services](https://huggingface.co/docs/text-embeddings-inference/en/index),
+ enabling RavenDB to seamlessly integrate its [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx) within your environment.
+
+* Note: RavenDB currently supports only text embeddings with Hugging Face.
+ Chat models are not supported through this integration.
+
+* In this article:
+ * [Define the connection string - from the Studio](../../../ai-integration/connection-strings/hugging-face.mdx#define-the-connection-string---from-the-studio)
+ * [Define the connection string - from the Client API](../../../ai-integration/connection-strings/hugging-face.mdx#define-the-connection-string---from-the-client-api)
+ * [Syntax](../../../ai-integration/connection-strings/hugging-face.mdx#syntax)
+
+
+
+## Define the connection string - from the Studio
+
+
+
+1. **Name**
+ Enter a name for this connection string.
+
+2. **Identifier** (optional)
+ Learn more about the identifier in the [connection string identifier](../../../ai-integration/connection-strings/connection-strings-overview.mdx#the-connection-string-identifier) section.
+
+3. **Model Type**
+ Select "Text Embeddings".
+
+4. **Connector**
+ Select **Hugging Face** from the dropdown menu.
+
+5. **API key**
+ Enter the API key used to authenticate requests to Hugging Face's text embedding services.
+
+6. **Endpoint** (optional)
+ Select or enter the Hugging Face endpoint for generating embeddings from text.
+   If not specified, the default endpoint (`https://api-inference.huggingface.co/`) is used.
+
+7. **Model**
+ Specify the Hugging Face text embedding model to use.
+
+8. **Max concurrent query batches** (optional)
+ * When making vector search queries, the content of the search terms must also be converted to embeddings to compare them against the stored vectors.
+ Requests to generate such query embeddings via the AI provider are sent in batches.
+ * This parameter defines the maximum number of these batches that can be processed concurrently.
+ You can set a default value using the [Ai.Embeddings.MaxConcurrentBatches](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxconcurrentbatches) configuration key.
+
+9. Click **Test Connection** to confirm the connection string is set up correctly.
+
+10. Click **Save** to store the connection string or **Cancel** to discard changes.
+
+## Define the connection string - from the Client API
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Hugging Face
+ var connectionString = new AiConnectionString
+ {
+ // Connection string name & identifier
+ Name = "ConnectionStringToHuggingFace",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.TextEmbeddings,
+
+ // Hugging Face connection settings
+ HuggingFaceSettings = new HuggingFaceSettings(
+ apiKey: "your-api-key",
+ endpoint: "https://api-inference.huggingface.co/",
+ model: "sentence-transformers/all-MiniLM-L6-v2")
+ };
+
+ // Optionally, override the default maximum number of query embedding batches
+ // that can be processed concurrently
+ connectionString.HuggingFaceSettings.EmbeddingsMaxConcurrentBatches = 10;
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+}
+```
+
+
+## Syntax
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public string Identifier { get; set; }
+ public AiModelType ModelType { get; set; }
+ public HuggingFaceSettings HuggingFaceSettings { get; set; }
+}
+
+public class HuggingFaceSettings : AbstractAiSettings
+{
+ public string ApiKey { get; set; }
+ public string Endpoint { get; set; }
+ public string Model { get; set; }
+}
+
+public class AbstractAiSettings
+{
+ public int? EmbeddingsMaxConcurrentBatches { get; set; }
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/content/_mistral-ai-csharp.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_mistral-ai-csharp.mdx
new file mode 100644
index 0000000000..22a32befca
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_mistral-ai-csharp.mdx
@@ -0,0 +1,114 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to define a connection string to [Mistral AI](https://docs.mistral.ai/capabilities/embeddings/),
+ enabling RavenDB to seamlessly integrate its [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx) with Mistral's API.
+
+* Note: RavenDB currently supports only text embeddings with Mistral AI.
+ Chat models are not supported through this integration.
+
+* In this article:
+ * [Define the connection string - from the Studio](../../../ai-integration/connection-strings/mistral-ai.mdx#define-the-connection-string---from-the-studio)
+ * [Define the connection string - from the Client API](../../../ai-integration/connection-strings/mistral-ai.mdx#define-the-connection-string---from-the-client-api)
+ * [Syntax](../../../ai-integration/connection-strings/mistral-ai.mdx#syntax)
+
+
+
+## Define the connection string - from the Studio
+
+
+
+1. **Name**
+ Enter a name for this connection string.
+
+2. **Identifier** (optional)
+ Learn more about the identifier in the [connection string identifier](../../../ai-integration/connection-strings/connection-strings-overview.mdx#the-connection-string-identifier) section.
+
+3. **Model Type**
+ Select "Text Embeddings".
+
+4. **Connector**
+ Select **Mistral AI** from the dropdown menu.
+
+5. **API key**
+ Enter the API key used to authenticate requests to Mistral AI's API.
+
+6. **Endpoint**
+ Select or enter the Mistral AI endpoint for generating embeddings from text.
+
+7. **Model**
+ Select or enter the Mistral AI text embedding model to use.
+
+8. **Max concurrent query batches** (optional)
+ * When making vector search queries, the content of the search terms must also be converted to embeddings to compare them against the stored vectors.
+ Requests to generate such query embeddings via the AI provider are sent in batches.
+ * This parameter defines the maximum number of these batches that can be processed concurrently.
+ You can set a default value using the [Ai.Embeddings.MaxConcurrentBatches](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxconcurrentbatches) configuration key.
+
+9. Click **Test Connection** to confirm the connection string is set up correctly.
+
+10. Click **Save** to store the connection string or **Cancel** to discard changes.
+
+## Define the connection string - from the Client API
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Mistral AI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string name & identifier
+ Name = "ConnectionStringToMistralAI",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.TextEmbeddings,
+
+ // Mistral AI connection settings
+ MistralAiSettings = new MistralAiSettings(
+ apiKey: "your-api-key",
+ endpoint: "https://api.mistral.ai/v1",
+ model: "mistral-embed")
+ };
+
+ // Optionally, override the default maximum number of query embedding batches
+ // that can be processed concurrently
+ connectionString.MistralAiSettings.EmbeddingsMaxConcurrentBatches = 10;
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+}
+```
+
+
+## Syntax
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public string Identifier { get; set; }
+ public AiModelType ModelType { get; set; }
+ public MistralAiSettings MistralAiSettings { get; set; }
+}
+
+public class MistralAiSettings : AbstractAiSettings
+{
+ public string ApiKey { get; set; }
+ public string Endpoint { get; set; }
+ public string Model { get; set; }
+}
+
+public class AbstractAiSettings
+{
+ public int? EmbeddingsMaxConcurrentBatches { get; set; }
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/content/_ollama-csharp.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_ollama-csharp.mdx
new file mode 100644
index 0000000000..6e19637fdc
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_ollama-csharp.mdx
@@ -0,0 +1,210 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to define a connection string to [Ollama](https://ollama.com/blog/embedding-models),
+ enabling RavenDB to use Ollama models for [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx),
+ [Gen AI tasks](../../../ai-integration/gen-ai-integration/gen-ai-overview.mdx), and [AI agents](../../../ai-integration/ai-agents/ai-agents_overview.mdx).
+
+* In this article:
+ * [Define the connection string - from the Studio](../../../ai-integration/connection-strings/ollama.mdx#define-the-connection-string---from-the-studio)
+ * [Configuring a text embedding model](../../../ai-integration/connection-strings/ollama.mdx#configuring-a-text-embedding-model)
+ * [Configuring a chat model](../../../ai-integration/connection-strings/ollama.mdx#configuring-a-chat-model)
+ * [Define the connection string - from the Client API](../../../ai-integration/connection-strings/ollama.mdx#define-the-connection-string---from-the-client-api)
+ * [Syntax](../../../ai-integration/connection-strings/ollama.mdx#syntax)
+
+
+
+## Define the connection string - from the Studio
+
+### Configuring a text embedding model
+
+
+
+1. **Name**
+ Enter a name for this connection string.
+
+2. **Identifier** (optional)
+   Learn more about the identifier in the [connection string identifier](../../../ai-integration/connection-strings/connection-strings-overview.mdx#the-connection-string-identifier) section.
+
+3. **Model Type**
+ Select "Text Embeddings".
+
+4. **Connector**
+ Select **Ollama** from the dropdown menu.
+
+5. **URI**
+ Enter the Ollama API URI.
+
+6. **Model**
+ Specify the Ollama text embedding model to use.
+
+7. **Max concurrent query batches** (optional)
+ * When making vector search queries, the content of the search terms must also be converted to embeddings to compare them against the stored vectors.
+ Requests to generate such query embeddings via the AI provider are sent in batches.
+ * This parameter defines the maximum number of these batches that can be processed concurrently.
+ You can set a default value using the [Ai.Embeddings.MaxConcurrentBatches](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxconcurrentbatches) configuration key.
+
+8. Click **Test Connection** to confirm the connection string is set up correctly.
+
+9. Click **Save** to store the connection string or **Cancel** to discard changes.
+
+### Configuring a chat model
+
+* When configuring a chat model, the UI displays the same base fields as those used for [text embedding models](../../../ai-integration/connection-strings/ollama.mdx#configuring-a-text-embedding-model),
+ including the connection string _Name_, optional _Identifier_, _URI_, and _Model_ name.
+
+* In addition, two fields are specific to chat models: _Temperature_ and _Thinking mode_.
+
+
+
+1. **Model Type**
+ Select "Chat".
+
+2. **Model**
+ Enter the name of the Ollama model to use for chat completions.
+
+3. **Thinking mode** (optional)
+ The thinking mode setting controls whether the model outputs its internal reasoning steps before returning the final answer.
+   * When set to `Enabled`,
+     the model outputs a series of intermediate reasoning steps (chain of thought) before the final answer.
+     This may improve output quality for complex tasks, but increases response time and token usage.
+   * When set to `Disabled`,
+     the model returns only the final answer, without exposing intermediate steps.
+     This is typically faster and more cost-effective (uses fewer tokens),
+     but may reduce quality on complex reasoning tasks.
+   * When set to `Default`,
+     the model’s built-in default is used, which may vary depending on the selected model.
+   * Set this parameter based on the trade-off between task complexity and speed/cost requirements.
+
+4. **Temperature** (optional)
+ The temperature setting controls the randomness and creativity of the model’s output.
+ Valid values typically range from `0.0` to `2.0`:
+ * Higher values (e.g., `1.0` or above) produce more diverse and creative responses.
+ * Lower values (e.g., `0.2`) result in more focused, consistent, and deterministic output.
+ * If not explicitly set, Ollama defaults to a temperature of `0.8`.
+     See [Ollama's parameters reference](https://ollama.readthedocs.io/en/modelfile/#valid-parameters-and-values).
+
+---
+
+## Define the connection string - from the Client API
+
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Ollama
+ var connectionString = new AiConnectionString
+ {
+ // Connection string Name & Identifier
+ Name = "ConnectionStringToOllama",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.TextEmbeddings,
+
+ // Ollama connection settings
+ OllamaSettings = new OllamaSettings
+ {
+ Uri = "http://localhost:11434",
+
+ // Name of text embedding model to use
+ Model = "mxbai-embed-large",
+
+ // Optionally, override the default maximum number of query embedding batches
+ // that can be processed concurrently
+ EmbeddingsMaxConcurrentBatches = 10
+ }
+ };
+
+ // Deploy the connection string to the server
+ var putConnectionStringOp = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(putConnectionStringOp);
+}
+```
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Ollama
+ var connectionString = new AiConnectionString
+ {
+ // Connection string Name & Identifier
+ Name = "ConnectionStringToOllama",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.Chat,
+
+ // Ollama connection settings
+ OllamaSettings = new OllamaSettings
+ {
+ Uri = "http://localhost:11434",
+
+ // Name of chat model to use
+ Model = "llama3:8b-instruct",
+
+ // Optionally, set the model's temperature
+ Temperature = 0.4,
+
+ // Optionally, set the model's thinking behavior
+ Think = true
+ }
+ };
+
+ // Deploy the connection string to the server
+ var putConnectionStringOp = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(putConnectionStringOp);
+}
+```
+
+
+
+## Syntax
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public string Identifier { get; set; }
+ public AiModelType ModelType { get; set; }
+ public OllamaSettings OllamaSettings { get; set; }
+}
+
+public class OllamaSettings : AbstractAiSettings
+{
+ // The base URI of your Ollama server
+ // For a local setup, use: "http://localhost:11434"
+ public string Uri { get; set; }
+
+ // The name of the model to use
+ public string Model { get; set; }
+
+ // Relevant only for chat models:
+ // Control whether the model outputs its internal reasoning steps before returning the final answer.
+ // 'true' - the model outputs intermediate reasoning steps (chain of thought) before the final answer.
+ // 'false' - the model returns only the final answer, without exposing intermediate steps.
+ // 'null' - the model’s default behavior is used.
+ public bool? Think { get; set; }
+
+ // Relevant only for chat models:
+ // Controls the randomness and creativity of the model’s output.
+ // Higher values (e.g., 1.0 or above) produce more diverse and creative responses.
+ // Lower values (e.g., 0.2) result in more focused and deterministic output.
+ // If set to 'null', the temperature is not sent and the model's default will be used.
+ public double? Temperature { get; set; }
+}
+
+public class AbstractAiSettings
+{
+ public int? EmbeddingsMaxConcurrentBatches { get; set; }
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/content/_open-ai-csharp.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_open-ai-csharp.mdx
new file mode 100644
index 0000000000..08a595c6ce
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_open-ai-csharp.mdx
@@ -0,0 +1,217 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to define a connection string to the [OpenAI Service](https://platform.openai.com/docs/guides/embeddings),
+ enabling RavenDB to use OpenAI models for [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx),
+ [Gen AI tasks](../../../ai-integration/gen-ai-integration/gen-ai-overview.mdx), and [AI agents](../../../ai-integration/ai-agents/ai-agents_overview.mdx).
+
+* Use this connection string format to connect RavenDB to **any OpenAI-compatible provider**.
+  As long as the provider follows the OpenAI API format, RavenDB can use it for Embeddings generation, Gen AI tasks, and chat-based agent interactions (see the sketch below).
+
+* In this article:
+ * [Define the connection string - from the Studio](../../../ai-integration/connection-strings/open-ai.mdx#define-the-connection-string---from-the-studio)
+ * [Configuring a text embedding model](../../../ai-integration/connection-strings/open-ai.mdx#configuring-a-text-embedding-model)
+ * [Configuring a chat model](../../../ai-integration/connection-strings/open-ai.mdx#configuring-a-chat-model)
+ * [Define the connection string - from the Client API](../../../ai-integration/connection-strings/open-ai.mdx#define-the-connection-string---from-the-client-api)
+ * [Syntax](../../../ai-integration/connection-strings/open-ai.mdx#syntax)
+
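+For example, you can point the OpenAI connector at a self-hosted, OpenAI-compatible
+inference server instead of OpenAI itself. The following minimal sketch illustrates
+this; the endpoint URL and model name are hypothetical placeholders for whatever
+your compatible provider exposes.
+
+```csharp
+using (var store = new DocumentStore())
+{
+    var connectionString = new AiConnectionString
+    {
+        Name = "ConnectionStringToCompatibleProvider",
+
+        // Model type
+        ModelType = AiModelType.TextEmbeddings,
+
+        // Any provider that follows the OpenAI API format can be targeted here.
+        // The endpoint and model below are placeholders - substitute your provider's values.
+        OpenAiSettings = new OpenAiSettings
+        {
+            ApiKey = "your-api-key",
+            Endpoint = "http://localhost:8000/v1",
+            Model = "your-provider-embedding-model"
+        }
+    };
+
+    // Deploy the connection string to the server
+    var putConnectionStringOp = new PutConnectionStringOperation(connectionString);
+    store.Maintenance.Send(putConnectionStringOp);
+}
+```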
+
+
+## Define the connection string - from the Studio
+
+### Configuring a text embedding model
+
+
+
+1. **Name**
+ Enter a name for this connection string.
+
+2. **Identifier** (optional)
+ Learn more about the identifier in the [connection string identifier](../../../ai-integration/connection-strings/connection-strings-overview.mdx#the-connection-string-identifier) section.
+
+3. **Model Type**
+ Select "Text Embeddings".
+
+4. **Connector**
+ Select **OpenAI** from the dropdown menu.
+
+5. **API key**
+ Enter the API key used to authenticate requests to OpenAI or any OpenAI-compatible provider.
+
+6. **Endpoint**
+ Enter the base URL of the OpenAI API.
+ This can be the standard OpenAI endpoint or a URL provided by any OpenAI-compatible provider.
+
+7. **Model**
+ Select or enter the text embedding model to use, as provided by OpenAI or any OpenAI-compatible provider.
+
+8. **Organization ID** (optional)
+ * Set the organization ID to use for the `OpenAI-Organization` request header.
+ * Users belonging to multiple organizations can set this value to specify which organization is used for an API request.
+ Usage from these API requests will count against the specified organization's quota.
+ * If not specified, the header will be omitted, and the default organization will be billed.
+ You can change your default organization in your user settings.
+    * Learn more in [Setting up your organization](https://platform.openai.com/docs/guides/production-best-practices/setting-up-your-organization#setting-up-your-organization).
+
+9. **Project ID** (optional)
+ * Set the project ID to use for the `OpenAI-Project` request header.
+ * Users who are accessing their projects through their legacy user API key can set this value to specify which project is used for an API request.
+ Usage from these API requests will count as usage for the specified project.
+ * If not specified, the header will be omitted, and the default project will be accessed.
+
+10. **Dimensions** (optional)
+ * Specify the number of dimensions for the output embeddings.
+ Supported only by _text-embedding-3_ and later models.
+ * If not specified, the model's default dimensionality is used.
+
+11. **Max concurrent query batches** (optional)
+ * When making vector search queries, the content of the search terms must also be converted to embeddings to compare them against the stored vectors.
+ Requests to generate such query embeddings via the AI provider are sent in batches.
+ * This parameter defines the maximum number of these batches that can be processed concurrently.
+ You can set a default value using the [Ai.Embeddings.MaxConcurrentBatches](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxconcurrentbatches) configuration key.
+
+12. Click **Test Connection** to confirm the connection string is set up correctly.
+
+13. Click **Save** to store the connection string or **Cancel** to discard changes.
+
+### Configuring a chat model
+
+* When configuring a chat model, the UI displays the same base fields as those used for [text embedding models](../../../ai-integration/connection-strings/open-ai.mdx#configuring-a-text-embedding-model),
+ including the connection string _Name_, optional _Identifier_, _API Key_, _Endpoint_, _Model_ name, _Organization ID_, and _Project ID_.
+
+* One additional setting is specific to chat models: _Temperature_.
+
+
+
+1. **Model Type**
+ Select "Chat".
+
+2. **Model**
+ Enter the name of the OpenAI model to use for chat completions.
+
+3. **Temperature** (optional)
+ The temperature setting controls the randomness and creativity of the model’s output.
+ Valid values typically range from `0.0` to `2.0`:
+ * Higher values (e.g., `1.0` or above) produce more diverse and creative responses.
+ * Lower values (e.g., `0.2`) result in more focused, consistent, and deterministic output.
+ * If not explicitly set, OpenAI uses a default temperature of `1.0`.
+ See [OpenAI chat completions parameters](https://platform.openai.com/docs/api-reference/chat/create#chat_create-temperature).
+
+---
+
+## Define the connection string - from the Client API
+
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to OpenAI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string Name & Identifier
+ Name = "ConnectionStringToOpenAI",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.TextEmbeddings,
+
+ // OpenAI connection settings
+ OpenAiSettings = new OpenAiSettings
+ {
+ ApiKey = "your-api-key",
+ Endpoint = "https://api.openai.com/v1",
+
+ // Name of text embedding model to use
+ Model = "text-embedding-3-small",
+
+ // Optionally, override the default maximum number of query embedding batches
+ // that can be processed concurrently
+ EmbeddingsMaxConcurrentBatches = 10
+ }
+ };
+
+ // Deploy the connection string to the server
+ var putConnectionStringOp = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(putConnectionStringOp);
+}
+```
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to OpenAI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string Name & Identifier
+ Name = "ConnectionStringToOpenAI",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.Chat,
+
+ // OpenAI connection settings
+ OpenAiSettings = new OpenAiSettings
+ {
+ ApiKey = "your-api-key",
+ Endpoint = "https://api.openai.com/v1",
+
+            // Name of chat model to use
+ Model = "gpt-4o",
+
+ // Optionally, set the model's temperature
+ Temperature = 0.4
+ }
+ };
+
+ // Deploy the connection string to the server
+ var putConnectionStringOp = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(putConnectionStringOp);
+}
+```
+
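+The optional _Organization ID_, _Project ID_, and _Dimensions_ settings described in
+the Studio section above map to the `OrganizationId`, `ProjectId`, and `Dimensions`
+properties of `OpenAiSettings` (see the Syntax section below). The following minimal
+sketch sets them from the Client API; the ID values and dimension count are
+placeholders, not recommended values.
+
+```csharp
+using (var store = new DocumentStore())
+{
+    var connectionString = new AiConnectionString
+    {
+        Name = "ConnectionStringToOpenAI",
+        ModelType = AiModelType.TextEmbeddings,
+
+        OpenAiSettings = new OpenAiSettings
+        {
+            ApiKey = "your-api-key",
+            Endpoint = "https://api.openai.com/v1",
+            Model = "text-embedding-3-small",
+
+            // Optionally, attribute usage to a specific organization and project
+            // (placeholder values - use your own IDs)
+            OrganizationId = "org-your-organization-id",
+            ProjectId = "proj-your-project-id",
+
+            // Optionally, reduce the output embedding dimensionality
+            // (supported only by text-embedding-3 and later models)
+            Dimensions = 512
+        }
+    };
+
+    // Deploy the connection string to the server
+    var putConnectionStringOp = new PutConnectionStringOperation(connectionString);
+    store.Maintenance.Send(putConnectionStringOp);
+}
+```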
+
+
+## Syntax
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public string Identifier { get; set; }
+ public AiModelType ModelType { get; set; }
+ public OpenAiSettings OpenAiSettings { get; set; }
+}
+
+public class OpenAiSettings : AbstractAiSettings
+{
+ public string ApiKey { get; set; }
+ public string Endpoint { get; set; }
+ public string Model { get; set; }
+ public string OrganizationId { get; set; }
+ public string ProjectId { get; set; }
+
+ // Relevant only for text embedding models:
+ // Specifies the number of dimensions in the generated embedding vectors.
+ public int? Dimensions { get; set; }
+
+ // Relevant only for chat models:
+ // Controls the randomness and creativity of the model’s output.
+ // Higher values (e.g., 1.0 or above) produce more diverse and creative responses.
+ // Lower values (e.g., 0.2) result in more focused and deterministic output.
+ // If set to 'null', the temperature is not sent and the model's default will be used.
+ public double? Temperature { get; set; }
+}
+
+public class AbstractAiSettings
+{
+ public int? EmbeddingsMaxConcurrentBatches { get; set; }
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/content/_vertex-ai-csharp.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_vertex-ai-csharp.mdx
new file mode 100644
index 0000000000..b32a13e501
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/content/_vertex-ai-csharp.mdx
@@ -0,0 +1,152 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to define a connection string to [Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings),
+ enabling RavenDB to seamlessly integrate its [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx) with Google Cloud’s Vertex AI services.
+
+* This configuration supports **Vertex AI embeddings** only.
+ It is not compatible with Google AI (Gemini API) endpoints or API key authentication.
+
+* RavenDB currently supports only text embeddings with Vertex AI.
+ Chat models are not supported through this integration.
+
+* In this article:
+ * [Define the connection string - from the Studio](../../../ai-integration/connection-strings/vertex-ai.mdx#define-the-connection-string---from-the-studio)
+ * [Define the connection string - from the Client API](../../../ai-integration/connection-strings/vertex-ai.mdx#define-the-connection-string---from-the-client-api)
+ * [Syntax](../../../ai-integration/connection-strings/vertex-ai.mdx#syntax)
+
+
+
+## Define the connection string - from the Studio
+
+
+
+1. **Name**
+ Enter a name for this connection string.
+
+2. **Identifier** (optional)
+ Enter an identifier for this connection string.
+ Learn more about the identifier in the [connection string identifier](../../../ai-integration/connection-strings/connection-strings-overview.mdx#the-connection-string-identifier) section.
+
+3. **Model Type**
+ Select "Text Embeddings".
+
+4. **Connector**
+ Select **Vertex AI** from the dropdown menu.
+
+5. **AI Version** (optional)
+ * Select the Vertex AI version to use.
+ * If not specified, `V1_Beta` is used.
+ Learn more in the [Vertex AI REST API reference](https://cloud.google.com/vertex-ai/docs/reference/rest).
+
+6. **Google Credentials Json**
+ Click "Show credentials" to enter your Google Cloud credentials in JSON format.
+ These credentials are used to authenticate requests to Vertex AI services.
+ To generate this JSON, follow the steps in [Google's guide to creating service account credentials](https://developers.google.com/workspace/guides/create-credentials#service-account).
+
+ Example:
+
+
+ ```json
+ {
+ "type": "service_account",
+ "project_id": "test-raven-237012",
+ "private_key_id": "12345678123412341234123456789101",
+ "private_key": "-----BEGIN PRIVATE KEY-----\\abCse=-----END PRIVATE KEY-----",
+ "client_email": "raven@test-raven-237012-237012.iam.gserviceaccount.com",
+ "client_id": "111390682349634407434",
+ "auth_uri": "https://accounts.google.com/o/oauth2/auth",
+ "token_uri": "https://oauth2.googleapis.com/token",
+ "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
+ "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/viewonly%40test-raven-237012.iam.gserviceaccount.com"
+ }
+ ```
+
+
+7. **Model**
+ Select or enter the Vertex AI text embedding model to use.
+
+8. **Location**
+   Enter the Google Cloud region where the Vertex AI model is hosted (e.g., _us-central1_).
+
+9. **Max concurrent query batches** (optional)
+ * When making vector search queries, the content of the search terms must also be converted to embeddings to compare them against the stored vectors.
+ Requests to generate such query embeddings via the AI provider are sent in batches.
+ * This parameter defines the maximum number of these batches that can be processed concurrently.
+ You can set a default value using the [Ai.Embeddings.MaxConcurrentBatches](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxconcurrentbatches) configuration key.
+
+10. Click **Test Connection** to confirm the connection string is set up correctly.
+
+11. Click **Save** to store the connection string or **Cancel** to discard changes.
+
+## Define the connection string - from the Client API
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Vertex AI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string name & identifier
+ Name = "ConnectionStringToVertexAI",
+ Identifier = "identifier-to-the-connection-string", // optional
+
+ // Model type
+ ModelType = AiModelType.TextEmbeddings,
+
+ // Vertex AI connection settings
+ VertexSettings = new VertexSettings(
+ model: "text‑embedding‑005", // Name of the Vertex AI model to use
+ googleCredentialsJson: "{...}", // Contents of your service account JSON file
+ location: "us-central1", // Region where the model is hosted
+ aiVersion: VertexAIVersion.V1) // Optional: specify V1 or V1_Beta
+ };
+
+ // Optionally, override the default maximum number of query embedding batches
+ // that can be processed concurrently
+    connectionString.VertexSettings.EmbeddingsMaxConcurrentBatches = 10;
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+}
+```
+
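+In practice, you may prefer loading the service-account JSON from a file rather than
+embedding it inline. The following minimal sketch assumes a local credentials file;
+the file path is illustrative only.
+
+```csharp
+using (var store = new DocumentStore())
+{
+    // Load the Google service-account credentials from a local file
+    // (hypothetical path - adjust to your environment)
+    var credentialsJson = System.IO.File.ReadAllText("google-service-account.json");
+
+    var connectionString = new AiConnectionString
+    {
+        Name = "ConnectionStringToVertexAI",
+        ModelType = AiModelType.TextEmbeddings,
+
+        VertexSettings = new VertexSettings(
+            model: "text-embedding-005",
+            googleCredentialsJson: credentialsJson,
+            location: "us-central1",
+            aiVersion: VertexAIVersion.V1)
+    };
+
+    // Deploy the connection string to the server
+    var operation = new PutConnectionStringOperation(connectionString);
+    store.Maintenance.Send(operation);
+}
+```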
+
+## Syntax
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public string Identifier { get; set; }
+ public AiModelType ModelType { get; set; }
+ public VertexSettings VertexSettings { get; set; }
+}
+
+public class VertexSettings : AbstractAiSettings
+{
+ public string Model { get; set; }
+ public string GoogleCredentialsJson { get; set; }
+ public string Location { get; set; }
+ public VertexAIVersion? AiVersion { get; set; }
+}
+
+public enum VertexAIVersion
+{
+ V1, // Represents the "V1" version of the Vertex AI API.
+ V1_Beta // Represents the "V1 beta" version of the Vertex AI API.
+}
+
+public class AbstractAiSettings
+{
+ public int? EmbeddingsMaxConcurrentBatches { get; set; }
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/embedded.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/embedded.mdx
new file mode 100644
index 0000000000..6c64d1df75
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/embedded.mdx
@@ -0,0 +1,33 @@
+---
+title: "Connection String to bge-micro-v2 (Embedded)"
+hide_table_of_contents: true
+sidebar_label: bge-micro-v2 (Embedded)
+sidebar_position: 8
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import EmbeddedCsharp from './content/_embedded-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/google-ai.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/google-ai.mdx
new file mode 100644
index 0000000000..5812dd87c7
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/google-ai.mdx
@@ -0,0 +1,33 @@
+---
+title: "Connection String to Google AI"
+hide_table_of_contents: true
+sidebar_label: Google AI
+sidebar_position: 6
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GoogleAiCsharp from './content/_google-ai-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/hugging-face.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/hugging-face.mdx
new file mode 100644
index 0000000000..3c8dd6bed7
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/hugging-face.mdx
@@ -0,0 +1,33 @@
+---
+title: "Connection String to Hugging Face"
+hide_table_of_contents: true
+sidebar_label: Hugging Face
+sidebar_position: 3
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HuggingFaceCsharp from './content/_hugging-face-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/mistral-ai.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/mistral-ai.mdx
new file mode 100644
index 0000000000..fbd5d11c65
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/mistral-ai.mdx
@@ -0,0 +1,33 @@
+---
+title: "Connection String to Mistral AI"
+hide_table_of_contents: true
+sidebar_label: Mistral AI
+sidebar_position: 5
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import MistralAiCsharp from './content/_mistral-ai-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/ollama.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/ollama.mdx
new file mode 100644
index 0000000000..ce6ed2238d
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/ollama.mdx
@@ -0,0 +1,33 @@
+---
+title: "Connection String to Ollama"
+hide_table_of_contents: true
+sidebar_label: Ollama
+sidebar_position: 4
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import OllamaCsharp from './content/_ollama-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/open-ai.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/open-ai.mdx
new file mode 100644
index 0000000000..4acddfe3e8
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/open-ai.mdx
@@ -0,0 +1,33 @@
+---
+title: "Connection String to OpenAI and OpenAI-Compatible Providers"
+hide_table_of_contents: true
+sidebar_label: OpenAI & Compatible Providers
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import OpenAiCsharp from './content/_open-ai-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/connection-strings/vertex-ai.mdx b/versioned_docs/version-7.1/ai-integration/connection-strings/vertex-ai.mdx
new file mode 100644
index 0000000000..540e396a4b
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/connection-strings/vertex-ai.mdx
@@ -0,0 +1,33 @@
+---
+title: "Connection String to Vertex AI"
+hide_table_of_contents: true
+sidebar_label: Vertex AI
+sidebar_position: 7
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GoogleAiCsharp from './content/_vertex-ai-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/_category_.json b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/_category_.json
new file mode 100644
index 0000000000..6970b8e6ed
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 2,
+ "label": "GenAI Integration"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/article-cover-genai.webp b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/article-cover-genai.webp
new file mode 100644
index 0000000000..3711eca3d2
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/article-cover-genai.webp differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_hash-flow.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_hash-flow.png
new file mode 100644
index 0000000000..f606f7a99c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_hash-flow.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_licensing.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_licensing.png
new file mode 100644
index 0000000000..044434cf2a
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_licensing.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_metadata.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_metadata.png
new file mode 100644
index 0000000000..a1f485b5c0
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_overview_metadata.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_api-image.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_api-image.png
new file mode 100644
index 0000000000..b43a888381
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_api-image.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_ov-image.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_ov-image.png
new file mode 100644
index 0000000000..0f240ad3b5
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_ov-image.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_studio-image.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_studio-image.png
new file mode 100644
index 0000000000..612fb0c120
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/gen-ai_start_studio-image.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/snagit/gen-ai_overview_hash-flow.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/snagit/gen-ai_overview_hash-flow.snagx
new file mode 100644
index 0000000000..847b6d2e54
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/snagit/gen-ai_overview_hash-flow.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/snagit/gen-ai_overview_metadata.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/snagit/gen-ai_overview_metadata.snagx
new file mode 100644
index 0000000000..91ad8c67f5
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/snagit/gen-ai_overview_metadata.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/unlock-genai-potential-article-image.webp b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/unlock-genai-potential-article-image.webp
new file mode 100644
index 0000000000..7abb8a04e8
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/assets/unlock-genai-potential-article-image.webp differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/_category_.json b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/_category_.json
new file mode 100644
index 0000000000..6f210621b6
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 2,
+ "label": "Create GenAI Task"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_add-GenAI-task.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_add-GenAI-task.png
new file mode 100644
index 0000000000..6ca3125eb3
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_add-GenAI-task.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_configure-basic-settings.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_configure-basic-settings.png
new file mode 100644
index 0000000000..073efd9537
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_configure-basic-settings.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_define-prompt-and-json-schema.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_define-prompt-and-json-schema.png
new file mode 100644
index 0000000000..44c925f415
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_define-prompt-and-json-schema.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_generate-context-objects.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_generate-context-objects.png
new file mode 100644
index 0000000000..07dba7cc9d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_generate-context-objects.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_hash-flow.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_hash-flow.png
new file mode 100644
index 0000000000..b78423a9ec
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_hash-flow.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_licensing.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_licensing.png
new file mode 100644
index 0000000000..044434cf2a
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_licensing.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_metadata-identifier-and-hash-codes.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_metadata-identifier-and-hash-codes.png
new file mode 100644
index 0000000000..9784820ac8
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_metadata-identifier-and-hash-codes.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_ollama-connection-string.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_ollama-connection-string.png
new file mode 100644
index 0000000000..9d18cccc93
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_ollama-connection-string.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-generated-context-objects.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-generated-context-objects.png
new file mode 100644
index 0000000000..bd71b2dac0
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-generated-context-objects.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-prompt-and-json-schema.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-prompt-and-json-schema.png
new file mode 100644
index 0000000000..8da50d9727
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-prompt-and-json-schema.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-provide-update-script.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-provide-update-script.png
new file mode 100644
index 0000000000..8c2a6ad150
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_playground-provide-update-script.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_provide-update-script.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_provide-update-script.png
new file mode 100644
index 0000000000..efda6e361b
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_provide-update-script.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_review-task-configuration.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_review-task-configuration.png
new file mode 100644
index 0000000000..4c396decf8
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_review-task-configuration.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_select-ai-task-type.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_select-ai-task-type.png
new file mode 100644
index 0000000000..4dbd80ccc5
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/gen-ai_select-ai-task-type.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_add-GenAI-task.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_add-GenAI-task.snagx
new file mode 100644
index 0000000000..663a378158
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_add-GenAI-task.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_configure-basic-settings.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_configure-basic-settings.snagx
new file mode 100644
index 0000000000..495f5903fc
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_configure-basic-settings.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_define-prompt-and-json-schema.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_define-prompt-and-json-schema.snagx
new file mode 100644
index 0000000000..8a81211ba3
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_define-prompt-and-json-schema.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_generate-context-objects.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_generate-context-objects.snagx
new file mode 100644
index 0000000000..288e06443b
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_generate-context-objects.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_hash-flow.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_hash-flow.snagx
new file mode 100644
index 0000000000..7e6eff9d12
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_hash-flow.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_licensing.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_licensing.snagx
new file mode 100644
index 0000000000..cb3a8439a6
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_licensing.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_metadata-identifier-and-hash-codes.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_metadata-identifier-and-hash-codes.snagx
new file mode 100644
index 0000000000..f9b564b644
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_metadata-identifier-and-hash-codes.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_ollama-connection-string.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_ollama-connection-string.snagx
new file mode 100644
index 0000000000..fbd6c6f6b4
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_ollama-connection-string.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-generated-context-objects.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-generated-context-objects.snagx
new file mode 100644
index 0000000000..c7c56e0b80
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-generated-context-objects.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-prompt-and-json-schema.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-prompt-and-json-schema.snagx
new file mode 100644
index 0000000000..1bb5dee5ca
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-prompt-and-json-schema.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-provide-update-script.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-provide-update-script.snagx
new file mode 100644
index 0000000000..43e2c0fbfe
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_playground-provide-update-script.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_provide-update-script.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_provide-update-script.snagx
new file mode 100644
index 0000000000..ba6b0ab7c1
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_provide-update-script.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_review-task-configuration.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_review-task-configuration.snagx
new file mode 100644
index 0000000000..4a18f8389d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_review-task-configuration.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_select-ai-task-type.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_select-ai-task-type.snagx
new file mode 100644
index 0000000000..986b78d129
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/assets/snagit/gen-ai_select-ai-task-type.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api.mdx
new file mode 100644
index 0000000000..3b1738efff
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api.mdx
@@ -0,0 +1,485 @@
+---
+title: "Create GenAI Task: API"
+hide_table_of_contents: true
+sidebar_label: Client API
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Create GenAI Task: API
+
+
+
+* A GenAI task leverages an AI model to enable intelligent processing of documents at runtime.
+ * The task is associated with a document collection and with an AI model.
+ * It is an **ongoing task** that:
+ 1. Continuously monitors the collection;
+ 2. Whenever needed, like when a document is added to the collection, generates
+ user-defined context objects based on the source document data;
+ 3. Passes each context object to the AI model for further processing;
+ 4. Receives the AI model's JSON-based results;
+ 5. And finally, runs a user-defined script that potentially acts upon the results.
+
+* The main steps in defining a GenAI task are:
+ * Defining a [Connection string](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api#defining-a-connection-string)
+ to the AI model
+ * Defining a [Context generation script](../../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_context-objects)
+ * Defining a [Prompt](../../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_prompt)
+ * Defining a [JSON schema](../../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_json-schema)
+ * Defining an [Update script](../../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_update-script)
+
+* In this article:
+ * [Defining a Connection string](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api#defining-a-connection-string)
+ * [Defining the GenAI task](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api#defining-the-genai-task)
+ * [Full example](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api#full-example)
+
+
+
+
+
+
+## Defining a Connection string
+
+* Choose the model to connect with based on what you need from your GenAI task.
+  E.g., if you prioritize security and speed during a rapid development phase,
+  you may prefer a local AI service like [Ollama](../../../ai-integration/connection-strings/ollama).
+* Make sure you define the correct service: both Ollama and OpenAI are supported,
+  but you need to pick a model that supports generative AI,
+  such as Ollama's `llama3.2` or OpenAI's `gpt-4o-mini`.
+* Learn more about connection strings [here](../../../ai-integration/connection-strings/connection-strings-overview).
+
+### Example:
+
+
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to OpenAI
+ var connectionString = new AiConnectionString
+ {
+        // Connection string name
+ Name = "open-ai-cs",
+
+ // Connection type
+ ModelType = AiModelType.Chat,
+
+ // OpenAI connection settings
+ OpenAiSettings = new OpenAiSettings(
+ apiKey: "your-api-key",
+ endpoint: "https://api.openai.com/v1",
+ // text generation model
+ model: "gpt-4o-mini")
+ };
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+}
+```
+
+
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to Ollama
+ var connectionString = new AiConnectionString
+ {
+        // Connection string name
+ Name = "ollama-cs",
+
+        // Model type
+ ModelType = AiModelType.Chat,
+
+ // Ollama connection settings
+ OllamaSettings = new OllamaSettings(
+ // LLM model for text generation
+ model: "llama3.2",
+ // local URL
+ uri: "http://localhost:11434/")
+ };
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+}
+```
+
+
+
+### Syntax:
+
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public AiModelType ModelType { get; set; }
+ public string Identifier { get; set; }
+ public OpenAiSettings OpenAiSettings { get; set; }
+ ...
+}
+
+public class OpenAiSettings : AbstractAiSettings
+{
+ public string ApiKey { get; set; }
+ public string Endpoint { get; set; }
+ public string Model { get; set; }
+ public int? Dimensions { get; set; }
+ public string OrganizationId { get; set; }
+ public string ProjectId { get; set; }
+}
+```
+
+
+
+```csharp
+public class AiConnectionString
+{
+ public string Name { get; set; }
+ public AiModelType ModelType { get; set; }
+ public string Identifier { get; set; }
+ public OllamaSettings OllamaSettings { get; set; }
+ ...
+}
+
+public class OllamaSettings : AbstractAiSettings
+{
+ public string Model { get; set; }
+ public string Uri { get; set; }
+}
+```
+
+
+
+
+
+## Defining the GenAI task
+
+* Define a GenAI task using a `GenAiConfiguration` object.
+* Run the task using `AddGenAiOperation`.
+
+
+
+
+```csharp
+// Define a GenAI task configuration
+GenAiConfiguration config = new GenAiConfiguration
+{
+ // Task name
+ Name = "spam-filter",
+
+ // Unique user-defined task identifier
+ Identifier = "spam-filter",
+
+ // Connection string to AI model
+ ConnectionStringName = "open-ai-cs",
+
+ // Task is enabled
+ Disabled = false,
+
+ // Collection associated with the task
+ Collection = "Posts",
+
+    // Context generation script - defines the context objects sent to the AI model
+ GenAiTransformation = new GenAiTransformation
+ {
+ Script = @"
+ for(const comment of this.Comments)
+ {
+                ai.genContext({Text: comment.Text, Author: comment.Author, Id: comment.Id});
+            }"
+ },
+
+ // AI model Prompt - the instructions sent to the AI model
+ Prompt = @"
+ Check if the following blog post comment is spam or not.
+ A spam comment typically includes irrelevant or promotional content,
+ excessive links, misleading information, or is written with the intent
+ to manipulate search engines or advertise products/services.
+ Consider the language, intent, and relevance of the comment for
+ the blog post content.",
+
+ // Sample object - the layout for the AI model's response
+ SampleObject = @"
+ {
+ ""Blocked"": true,
+ ""Reason"": ""Concise reason for why this comment was marked as spam or ham""
+ }",
+
+ // Update script - specifies what to do with AI model replies.
+ // Use $input to access the context object that was sent to the AI model.
+    // Use $output to access the results object returned from the AI model.
+    // Use 'this' to access and modify the currently processed document.
+ UpdateScript = @"
+ // Find the comment
+ const idx = this.Comments.findIndex(c => c.Id == $input.Id);
+ // Was detected as spam
+ if($output.Blocked)
+ {
+ // Remove this comment
+ this.Comments.splice(idx, 1);
+ }",
+
+ // Max concurrent connections to AI model
+ MaxConcurrency = 4
+};
+
+// Run the task
+var GenAiOperation = new AddGenAiOperation(config);
+var addAiIntegrationTaskResult = store.Maintenance.Send(GenAiOperation);
+```
+
+
+
+```csharp
+// Define a GenAI task configuration
+GenAiConfiguration config = new GenAiConfiguration
+{
+ // Task name
+ Name = "spam-filter",
+
+ // Unique user-defined task identifier
+ Identifier = "spam-filter",
+
+ // Connection string to AI model
+ ConnectionStringName = "open-ai-cs",
+
+ // Task is enabled
+ Disabled = false,
+
+ // Collection associated with the task
+ Collection = "Posts",
+
+ // Context generation script - format for objects to be sent to the AI model
+ GenAiTransformation = new GenAiTransformation
+ {
+ Script = @"
+ for(const comment of this.Comments)
+ {
+                ai.genContext({Text: comment.Text, Author: comment.Author, Id: comment.Id});
+            }"
+ },
+
+ // AI model Prompt - the instructions sent to the AI model
+ Prompt = @"
+ Check if the following blog post comment is spam or not.
+ A spam comment typically includes irrelevant or promotional content,
+ excessive links, misleading information, or is written with the intent
+ to manipulate search engines or advertise products/services.
+ Consider the language, intent, and relevance of the comment for
+ the blog post content.",
+
+ // JSON schema - a schema to format the AI model's replies by
+ JsonSchema = @"{
+ ""name"": """ + "some-name" + @""",
+ ""strict"": true,
+ ""schema"": {
+ ""type"": ""object"",
+ ""properties"": {
+ ""Blocked"": {
+ ""type"": ""boolean""
+ },
+ ""Reason"": {
+ ""type"": ""string"",
+ ""description"": ""Concise reason for why this comment was marked as spam or ham""
+ }
+ },
+ ""required"": [
+ ""Blocked"",
+ ""Reason""
+ ],
+ ""additionalProperties"": false
+ }
+ }",
+
+    // Update script - specifies what to do with AI model replies.
+    // Use `$input` to access the context object that was sent to the AI model.
+    // Use `$output` to access the results object returned from the AI model.
+    // Use `this` to access and modify the currently processed document.
+ UpdateScript = @"
+ // Find the comment
+ const idx = this.Comments.findIndex(c => c.Id == $input.Id);
+ // Was detected as spam
+ if($output.Blocked)
+ {
+ // Remove this comment
+ this.Comments.splice(idx, 1);
+ }",
+
+ // Max concurrent connections to AI model
+ MaxConcurrency = 4
+};
+
+// Run the task
+var GenAiOperation = new AddGenAiOperation(config);
+var addAiIntegrationTaskResult = store.Maintenance.Send(GenAiOperation);
+```
+
+
+
+### `GenAiConfiguration`
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **Name** | `string` | Task name |
+| **Identifier** | `string` | Unique user-defined task identifier. Use only lowercase letters, numbers, and hyphens |
+| **ConnectionStringName** | `string` | Connection string name |
+| **Disabled** | `bool` | Determines whether the task is enabled or disabled |
+| **Collection** | `string` | Name of the document collection associated with the task |
+| **GenAiTransformation** | `GenAiTransformation` | Context generation script - format for objects to be sent to the AI model |
+| **Prompt** | `string` | AI model Prompt - the instructions sent to the AI model |
+| **SampleObject** | `string` | A [sample response object](../../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_json-schema) to format the AI model's replies by. If both a `SampleObject` and a `JsonSchema` are provided, the schema takes precedence |
+| **JsonSchema** | `string` | A [JSON schema](../../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_json-schema) to format the AI model's replies by. If both a `SampleObject` and a `JsonSchema` are provided, the schema takes precedence |
+| **UpdateScript** | `string` | Update script - specifies what to do with AI model replies |
+| **MaxConcurrency** | `int` | Max concurrent connections to the AI model (each connection serving a single context object) |
+
+
+
+## Full example
+
+The following example demonstrates how to define a GenAI task that removes spam comments from blog posts.
+
+After creating a connection string to the AI model, we define a GenAI task that:
+1. Monitors the `Posts` collection.
+2. For each document, generates a context object for each comment in the `Comments` array.
+3. Sends each context object to the AI model with a prompt to check if the comment is spam.
+4. Receives an AI model response per context object that determines whether the comment is spam or not and specifies the reasoning for the decision.
+5. If the comment is marked as spam, the task's update script removes the comment from the `Comments` array in the document.
+
+After the task is running, its functionality is demonstrated by adding to the `Posts` collection a blog post that includes a spam comment. Adding the post triggers the task, which scans the post's comments and removes the one that contains spam.
+
+```csharp
+// Define a connection string to OpenAI
+var connectionString = new AiConnectionString
+{
+ // Connection string name & identifier
+ Name = "open-ai-cs",
+
+ ModelType = AiModelType.Chat,
+
+    // OpenAI connection settings
+    OpenAiSettings = new OpenAiSettings(
+ apiKey: "your-api-key",
+ endpoint: "https://api.openai.com/v1",
+ // LLM model for text generation
+ model: "gpt-4.1")
+};
+
+// Deploy the connection string to the server
+var operation = new PutConnectionStringOperation(connectionString);
+var putConnectionStringResult = store.Maintenance.Send(operation);
+
+// Define a GenAI task configuration
+GenAiConfiguration config = new GenAiConfiguration
+{
+ // Task name
+ Name = "spam-filter",
+
+ // Unique user-defined task identifier
+ Identifier = "spam-filter",
+
+ // Connection string to AI model
+ ConnectionStringName = "open-ai-cs",
+
+ // Task is enabled
+ Disabled = false,
+
+ // Collection associated with the task
+ Collection = "Posts",
+
+ // Context generation script - format for objects to be sent to the AI model
+ GenAiTransformation = new GenAiTransformation
+ {
+ Script = @"
+ for(const comment of this.Comments)
+ {
+                ai.genContext({Text: comment.Text, Author: comment.Author, Id: comment.Id});
+            }"
+ },
+
+ // AI model Prompt - the instructions sent to the AI model
+ Prompt = @"
+ Check if the following blog post comment is spam or not.
+ A spam comment typically includes irrelevant or promotional content,
+ excessive links, misleading information, or is written with the intent
+ to manipulate search engines or advertise products/services.
+ Consider the language, intent, and relevance of the comment for
+ the blog post content.",
+
+ // Sample object - the layout for the AI model's response
+ SampleObject = JsonConvert.SerializeObject(
+ new
+ {
+ Blocked = true,
+ Reason = "Concise reason for why this comment was marked as spam or ham"
+ }),
+
+    // Update script - specifies what to do with AI model replies.
+    // Use `$input` to access the context object that was sent to the AI model.
+    // Use `$output` to access the results object returned from the AI model.
+    // Use `this` to access and modify the currently processed document.
+ UpdateScript = @"
+ // Find the comment
+ const idx = this.Comments.findIndex(c => c.Id == $input.Id);
+ // Was detected as spam
+ if($output.Blocked)
+ {
+ // Remove this comment
+ this.Comments.splice(idx, 1);
+ }",
+
+ // Max concurrent connections to AI model
+ MaxConcurrency = 4
+};
+
+// Run the task
+var GenAiOperation = new AddGenAiOperation(config);
+var addAiIntegrationTaskResult = store.Maintenance.Send(GenAiOperation);
+
+// Add a blog post document that includes a spam comment to the Posts collection.
+// Adding the post will trigger the GenAI task to process it.
+using (var session = store.OpenSession())
+{
+ var post = new
+ {
+ Name = "first post",
+ Body = "This is my first post",
+ Comments = new[]
+ {
+ new
+ {
+ Id = "comment/1",
+ Text = "This article really helped me understand how indexes work in RavenDB. Great write-up!",
+ Author = "John"
+ },
+ new
+ {
+ Id = "comment/2",
+ Text = "Learn how to make $5000/month from home! Visit click4cash.biz.example now!!!",
+ Author = "shady_marketer"
+ },
+ new
+ {
+ Id = "comment/3",
+ Text = "I tried this approach with IO_Uring in the past, but I run into problems " +
+ "with security around the IO systems and the CISO didn't let us deploy that to " +
+ "production. It is more mature at this point?",
+ Author = "dave"
+ }
+ }
+ };
+
+ session.Store(post, "posts/1");
+ session.Advanced.GetMetadataFor(post)["@collection"] = "Posts";
+ session.SaveChanges();
+}
+```
+
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio.mdx
new file mode 100644
index 0000000000..20d949242a
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio.mdx
@@ -0,0 +1,383 @@
+---
+title: "Create GenAI Task: Studio"
+hide_table_of_contents: true
+sidebar_label: Studio
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Create GenAI Task: Studio
+
+
+* In this article:
+ * [The GenAI Task wizard](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#the-genai-task-wizard)
+ * [Add a GenAI Task](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#add-a-genai-task)
+ * [Configure basic settings](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#configure-basic-settings)
+ * [Generate context objects](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#generate-context-objects)
+ * [Define Prompt and JSON schema](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#define-prompt-and-json-schema)
+ * [Provide update script](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#provide-update-script)
+ * [Review configuration and Save task](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#review-configuration-and-save-task)
+
+
+
+
+
+## The GenAI Task wizard
+Studio's [AI Tasks](../../../ai-integration/ai-tasks-list-view) view includes a GenAI **wizard**.
+Using this wizard, you can easily **create and configure** your task, as well as **test each step
+of its creation** in a dedicated "playground".
+We will go through the task creation and testing sequence below, using the wizard.
+
+### Sample data:
+While demonstrating the creation and testing of a GenAI task, we will use the following
+sample document, illustrating a blog post with an array of comments, one of which is spam.
+We will use our GenAI task to go through the comments and identify spam entries so we can
+remove them.
+To use this sample throughout this guide, simply create a document named `posts/1` with the
+following content.
+
+
+```json
+{
+ "Name": "first post",
+ "Body": "This is my first post",
+ "Comments": [
+ {
+ "Id": "comment/1",
+ "Text": "This article really helped me understand how indexes work in RavenDB. Great write-up!",
+ "Author": "John"
+ },
+ {
+ "Id": "comment/2",
+ "Text": "Learn how to make $5000/month from home! Visit click4cash.biz.example now!!!",
+ "Author": "shady_marketer"
+ },
+ {
+ "Id": "comment/3",
+ "Text": "I tried this approach with IO_Uring in the past, but I run into problems with security around the IO systems and the CISO didn't let us deploy that to production. It is more mature at this point?",
+ "Author": "dave"
+ }
+ ],
+ "@metadata": {
+ "@collection": "Posts"
+ }
+}
+```
+
+
+
+
+## Add a GenAI Task
+To add a new GenAI task, open: **AI Hub** > **AI Tasks** > **Add AI Task** > **GenAI**
+
+
+
+1. **AI Hub**
+ Click to open the [AI Hub view](../../../ai-integration/ai-tasks-list-view).
+ Use this view to handle AI connection strings and tasks, and to view task statistics.
+2. **AI Tasks**
+ Click to open the AI Tasks view.
+ Use this view to list, configure, or remove AI tasks.
+3. **Add AI Task**
+ Click to add an AI task.
+ 
+ Click the GenAI option to open a wizard that will guide you through the creation and testing of your GenAI task.
+ The steps of this wizard are explained below, starting with basic GenAI task settings.
+
+
+
+## Configure basic settings
+
+
+
+1. **Task name**
+ Give your task a meaningful name.
+
+2. **Unique user-defined task identifier**
+ Give your task a unique identifier.
+ * Use only lowercase letters, numbers, and hyphens.
+ * You can provide the identifier yourself, or click **Regenerate** to create it automatically.
+ * When you complete and save your task and it starts running, it will add a metadata property to documents it processes, named after the identifier you define here.
+     The task will use this property to keep track of document parts it has already processed.
+ See an example [here](../../../ai-integration/gen-ai-integration/gen-ai-overview#gen-ai-metadata).
+
+3. **Task state**
+ Use this switch to enable or disable the task.
+
+4. **Set responsible node**
+ Toggle ON to choose the cluster node that will be responsible for this task.
+ Toggle OFF for the cluster to pick a responsible node for you.
+
+5. **Connection string**
+   The GenAI task will use an AI model to process your data.
+   It can be a local AI model like Ollama, or an external model like OpenAI.
+   Use this bar to select or create the connection string that the GenAI task
+   will use to connect to the AI model.
+ * You can create the connection string either here or in the dedicated
+ [AI Connection Strings](../../../ai-integration/connection-strings/connection-strings-overview) view.
+   * Here is an example of a connection string to a local [Ollama](../../../ai-integration/connection-strings/ollama)
+ AI model capable of filtering spam entries from a blog.
+
+ 
+
+6. **Steps completed**
+   You can use this interactive board as you advance through the wizard to see which steps you have completed and what remains to be defined. Click a listed configuration option to modify its settings.
+
+
+
+## Generate context objects
+
+
+
+1. **Source collection**
+ Select the collection whose documents this GenAI task will monitor and process.
+ E.g., `Posts`
+
+2. **Context generation script**
+   Provide a JavaScript script that your GenAI task will run over each document it retrieves
+   from the selected collection.
+   The purpose of this script is to form a `Context object` containing data extracted from the document,
+   in a form that the AI model can process effectively.
+ E.g.,
+
+ ```javascript
+ // Go through all the comments that were left for this blog
+ for(const comment of this.Comments)
+ {
+ // Use the `ai.genContext` method to generate a context object for each comment,
+ // that includes the comment text, author, and id.
+ ai.genContext({Text: comment.Text, Author: comment.Author, Id: comment.Id});
+ }
+ ```
+
+
+3. **Playground**
+ Each of the steps from now on is equipped with its own playground, allowing you
+ to test what actually happens when you apply your configuration.
+
+   The playground is a secluded environment; using it will **not** modify your documents.
+
+ * **Collapse/Expand**
+ Toggle to hide or show the playground area.
+ * **Edit mode**
+ * Toggle OFF to use the selected document as the source for the generated context.
+ * Toggle ON to edit the document freely before running the test.
+ * **Select a document from the source collection**
+ Select a document to test your context generation script on.
+ * To use the same sample document we're using to demonstrate the process,
+ add [posts/1](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#sample-data) and select it here.
+     * Or, if you prefer, click `enter a document manually` and enter the sample document content yourself.
+ * To run the test, click the **Test context** button.
+ If all works well, you will see a list of context objects created by your script, one for each comment.
+
+ 
+
+4. **Controls**
+ * **Cancel**
+ Click to cancel any changes made in the task.
+ * **Back**
+ Click to return to the previous step, [Configure basic settings](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#configure-basic-settings).
+ * **Test Context**
+ Click to test your context generation script on the document selected/entered in the playground area.
+ * You do not have to use the playground; you'll be able to define and save your task without testing
+ it first.
+ * However, running the test here will allow you to use the generated result set in the playground of
+ the next wizard step.
+ * **Next**
+ Click to advance to the next step, [Define prompt & JSON schema](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#define-prompt-and-json-schema).
+
+
+
+## Define Prompt and JSON schema
+
+* The GenAI task will send the AI model each context object (configured in the previous step)
+ on its own connection, along with the prompt and JSON schema you provide in this view.
+* The context provides the data for the model to process.
+ The prompt determines what the model should do with the data.
+ The JSON schema formats the returned results, so the GenAI task can use them effectively.
+
+
+
+1. **Prompt**
+ These are the instructions for the AI model.
+ For our spam filtering GenAI task, we can specify, for example:
+
+
+ ```plain
+ Check if the following blog post comment is spam or not.
+ A spam comment typically includes irrelevant or promotional content,
+ excessive links, misleading information, or is written with the intent to
+ manipulate search engines or advertise products/services.
+ Consider the language, intent, and relevance of the comment for
+ the blog post content.
+ ```
+
+
+2. **JSON schema**
+ The AI model will return a results JSON object for each context object sent to it.
+   The JSON schema defined here determines the layout of this results object.
+ * **Use sample object**
+ * Select this option to provide an object that the AI model will use as an example.
+ The results object will be formatted as the sample object you provide.
+ * Textual fields in the sample object can be written in natural language,
+       guiding the AI model on what to write in the results.
+
+ E.g. if you select this option and provide this object:
+
+
+ ```json
+ {
+ "Blocked": true,
+ "Reason": "Concise reason for why this comment was marked as spam or ham"
+ }
+ ```
+
+
+ Then result objects returned by the AI model may look like:
+
+
+ ```json
+ {
+ "Blocked": false,
+ "Reason": "Relevant and genuine"
+ }
+ ```
+
+
+
+ ```json
+ {
+ "Blocked": true,
+ "Reason": "Spam"
+ }
+ ```
+
+
+ * **Provide JSON schema**
+ Instead of a sample object, you can provide a formal JSON schema.
+ Providing a sample object (rather than a formal schema) is normally more convenient.
+ Behind the scenes, RavenDB will send a formal schema in any case, since this is the
+     format that the LLM expects to receive. If you provide a schema, RavenDB will send it
+     as is; if you provide a sample object, RavenDB will translate it into a schema for you
+ before sending it to the LLM.
+
+3. **Playground**
+ Use this playground to send the AI model context objects with their prompts and schemas,
+ and see the results returned by the AI model.
+ * **Collapse/Expand**
+ Toggle to hide or show the playground area.
+ * **Edit mode**
+ * Toggle OFF to use the results generated using the playground of the previous step.
+ * Toggle ON to edit the context objects freely before trying out your prompt and schema on them.
+ This option gives you the freedom to test any context objects you like, regardless of the results
+ generated by the playground of the previous step.
+ * To run the test, click the **Test model** button.
+     The GenAI task will send the model each context object on its own connection, accompanied
+     by the prompt and JSON schema defined above.
+     The AI model will process each context object and return its results in the format set by your schema.
+ E.g. -
+
+ 
+
+4. **Controls**
+ * **Cancel**
+ Click to cancel any changes made in the task.
+ * **Back**
+ Click to return to the previous step, [Generate context objects](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#generate-context-objects).
+   * **Test Model**
+     Click to test the prompt and JSON schema you defined above on the context objects generated from the
+ document you provided.
+ * **Next**
+ Click to advance to the next step, [Provide update script](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#provide-update-script).
+
+
+
+## Provide update script
+
+Now that the AI model has returned its output, the GenAI task needs to know what to do with it.
+The update script set in this step determines what actions are taken when the results arrive.
+
+
+
+1. **Update script**
+   Provide a JavaScript script that processes each results object returned from the AI model and takes the needed actions.
+   In our case, as the results determine whether each blog comment is spam or not, the script can react to results indicating that a comment is spam by removing the comment.
+   In the script, we can use the `$input` variable to access the context object that was sent to the AI model (the sample below uses it to locate the comment by its Id), the `$output` variable to access the results object returned from the AI model, and `this` to access and modify the currently processed document.
+
+
+ ```javascript
+ // Find the comment
+ const idx = this.Comments.findIndex(c => c.Id == $input.Id);
+ // Was detected as spam
+ if($output.Blocked)
+ {
+ // Remove this comment
+       this.Comments.splice(idx, 1);
+ }
+ ```
+
+
+2. **Playground**
+ Use this playground to verify that your update script does what you want it to do.
+ In the case of our spam filtering task, we can check whether the comment that was
+ detected as spam was removed from the blog post.
+
+ 
+
+ * **Edit mode**
+ * Toggle OFF to use the results generated using the playground of the previous step.
+ * Toggle ON to edit the model output freely before testing your update script on it.
+ This option gives you the freedom to test any content you like, regardless of the results
+ generated by the playground of the previous step.
+
+3. **Controls**
+ * **Cancel**
+ Click to cancel any changes made in the task.
+ * **Back**
+ Click to return to the previous step, [Define Prompt and JSON schema](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#define-prompt-and-json-schema).
+ * **Test Context**
+     Click to test the update script you defined above.
+ Note that even though in our case we remove comments from existing documents,
+ the update script can leave the original document unchanged, create new documents,
+     and so on, as you choose.
+ * **Next**
+ Click to advance to the next step, [Review configuration and Save task](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#review-configuration-and-save-task).
+
+
+
+## Review configuration and Save task
+
+Use this final step to review your GenAI task configuration before saving and executing it.
+If your task is enabled, it will start running when you save it.
+
+
+
+1. **Review Configuration**
+ Click a step's **Edit** button to view and modify its current configuration.
+ Click a script/object **Show** button to view its current content.
+
+2. **Reprocess all documents**
+ * Enable this option to have the task reprocess all documents in the source collection.
+
+   Note that documents that were already processed, whose metadata hash code is identical to the hash code of the current task configuration (meaning the configuration hasn't changed since they were processed), will be skipped even if this option is enabled.
+
+ * Disable this option to have the task process only documents that it had not processed before.
+
+3. **Controls**
+ * **Cancel**
+ Click to cancel any changes made in the task.
+ * **Back**
+ Click to return to the previous step, [Provide update script](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#provide-update-script).
+ * **Save**
+ Click to save your task.
+ If enabled, saving the task will start its execution.
+
+ * Test your task and make sure you understand how it might change your documents before saving.
+ * Take every precaution to protect your data, including ensuring it is backed up.
+
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai-overview.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai-overview.mdx
new file mode 100644
index 0000000000..4df728057a
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai-overview.mdx
@@ -0,0 +1,288 @@
+---
+title: "GenAI Integration: Overview"
+hide_table_of_contents: true
+sidebar_label: Overview
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# GenAI Integration: Overview
+
+
+* **Ongoing GenAI tasks** allow RavenDB to connect and interact with Generative AI models, introducing intelligent, autonomous data processing in production.
+
+* Tasks can be easily defined, tested and deployed using [the client API](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api) or [Studio](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio).
+
+ While creating a GenAI task via Studio, a smart interactive **test environment** is provided, allowing each phase of the task to be tested in a secluded playground, freely and without changing your data, while at the same time producing a result set that can be tried out by the next phase.
+
+* A task can be built in minutes, e.g. to generate automated responses to frequently asked questions, escalate support tickets, summarize lengthy documents, enhance data security by detecting anomalies, or numerous other applications.
+ See a few additional examples in the [common use cases](../../ai-integration/gen-ai-integration/gen-ai-overview#common-use-cases) section below.
+
+* You can use local and remote AI models, e.g. a local `Ollama llama3.2` service during a development phase that requires speed and no additional costs, and a remote `OpenAI gpt-4o-mini` when you need a live service with advanced capabilities.
+
+* In this article:
+ * [RavenDB GenAI tasks](../../ai-integration/gen-ai-integration/gen-ai-overview#ravendb-genai-tasks)
+ * [The flow](../../ai-integration/gen-ai-integration/gen-ai-overview#the-flow)
+ * [The elements](../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements)
+ * [How to create and run a GenAI task](../../ai-integration/gen-ai-integration/gen-ai-overview#how-to-create-and-run-a-genai-task)
+ * [Runtime](../../ai-integration/gen-ai-integration/gen-ai-overview#runtime)
+ * [Tracking of processed document parts](../../ai-integration/gen-ai-integration/gen-ai-overview#tracking-of-processed-document-parts)
+ * [Licensing](../../ai-integration/gen-ai-integration/gen-ai-overview#licensing)
+ * [Supported services](../../ai-integration/gen-ai-integration/gen-ai-overview#supported-services)
+ * [Common use cases](../../ai-integration/gen-ai-integration/gen-ai-overview#common-use-cases)
+
+
+
+
+
+## RavenDB GenAI tasks
+
+RavenDB offers an integration of generative AI capabilities through user-defined **GenAI tasks**.
+A GenAI task is an ongoing process that continuously monitors a document collection associated with it and reacts when a document is added or modified: it retrieves the document, generates "context objects" based on its data, sends these objects to a generative AI model along with instructions regarding what to do with the data and how to format the reply, and potentially acts upon the model's response.
+
+### The flow:
+Let's put the stages described above in order.
+
+1. A GenAI task continuously monitors the collection it is associated with.
+2. When a document is added or modified, the task retrieves it.
+3. The task generates context objects based on the source document data.
+ To generate these objects, the task applies a user-defined [context generation script](../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_context-objects)
+   that runs through the source document and builds the context objects from its data.
+4. The task sends each context object to a GenAI model for processing.
+ * The task is associated with a [Connection string](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#studio_connection-string)
+ that defines how to connect to the AI model.
+ * Each context object is sent via a separate connection to the AI model.
+     (Note that the number of concurrent connections to the AI model is configurable via the [MaxConcurrency](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api#genaiconfiguration) setting.)
+ * Each context object is sent along with a user-defined [Prompt](../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_prompt),
+ that instructs the AI model what to do with the data, and
+ a user-defined [JSON schema](../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_json-schema)
+ that instructs the AI model how to shape its response.
+5. When the AI model returns its response, a user-defined [Update script](../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements_update-script)
+ is applied to handle the results.
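+
+To make the flow concrete, here is a condensed C# sketch of how these stages map onto a task configuration. It is a trimmed-down variant of the full example in the [create GenAI task](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api#full-example) article; the connection string name, scripts, and prompt are illustrative only.
+
+```csharp
+// A condensed sketch - see the create GenAI task article for a complete,
+// working configuration. `store` is assumed to be an initialized DocumentStore.
+var config = new GenAiConfiguration
+{
+    Name = "spam-filter",
+    Identifier = "spam-filter",
+    ConnectionStringName = "open-ai-cs",            // stage 4: how to reach the AI model
+    Collection = "Posts",                           // stages 1-2: the monitored collection
+    GenAiTransformation = new GenAiTransformation   // stage 3: context generation script
+    {
+        Script = @"ai.genContext({ Text: this.Body });"
+    },
+    Prompt = "Decide whether the following text is spam.",  // stage 4: model instructions
+    SampleObject = @"{ ""Blocked"": true }",                // stage 4: response layout
+    UpdateScript = @"this.Blocked = $output.Blocked;",      // stage 5: handle the results
+    MaxConcurrency = 4
+};
+store.Maintenance.Send(new AddGenAiOperation(config));
+```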
+
+### The elements:
+These are the elements that need to be defined for a GenAI task.
+
+* [Connection string](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#studio_connection-string)
+ The connection string defines the connection to the GenAI model.
+
+* **Context generation script**
+ The context generation script goes through the source document,
+ and applies the `ai.genContext` method to create **context objects** based on the source document's data.
+ E.g. -
+
+
+ ```javascript
+ for(const comment of this.Comments) {
+      // Use the ai.genContext method to generate a context object for each comment.
+ ai.genContext({Text: comment.Text, Author: comment.Author, Id: comment.Id});
+ }
+ ```
+
+
+ * RavenDB will pass the AI model **not** the source document, but the generated context objects.
+  * Producing a series of context objects that share a clear common format gives the communication
+    with the AI model a methodical, reliable structure that is under our full control.
+  * The script is also an important security layer between the database and the AI model, which
+    you can use to ensure that only data you actually want to share with the AI model is passed on.
+
+* **JSON schema**
+ This is a JSON-based object that defines the layout of the AI model's response.
+ This object can be either an **explicit JSON schema**, or a **sample response object**
+  that RavenDB will turn into a JSON schema for us.
+
+ It is normally easier to provide a sample response object, and let RavenDB create
+ the schema behind the scenes. E.g. -
+
+
+
+ ```json
+ {
+ "Blocked": true,
+ "Reason": "Concise reason for why this comment was marked as spam or ham"
+ }
+ ```
+
+
+
+ ```json
+ {
+ "name": "some-name",
+ "strict": true,
+ "schema": {
+ "type": "object",
+ "properties": {
+ "Blocked": {
+ "type": "boolean"
+ },
+ "Reason": {
+ "type": "string",
+ "description": "Concise reason for why this comment was marked as spam or ham"
+ }
+ },
+ "required": [
+ "Blocked",
+ "Reason"
+ ],
+ "additionalProperties": false
+ }
+ }
+ ```
+
+
+
+* **Prompt**
+  The prompt tells the AI model what we need it to do.
+ * It can be phrased in natural language.
+ * Since the JSON schema already specifies the response layout, including what fields we'd
+ like the AI model to fill and with what content, the prompt can be used simply to explain
+ what we want the model to do.
+ E.g. -
+
+
+ ```plain
+ Check if the following blog post comment is spam or not.
+ A spam comment typically includes irrelevant or promotional content,
+ excessive links, misleading information, or is written with the intent to
+ manipulate search engines or advertise products/services.
+ Consider the language, intent, and relevance of the comment for
+ the blog post content.
+ ```
+
+
+* **Update Script**
+ The update script is executed when the AI model responds to a context object we've sent it.
+ * The update script can take any action, based on the information included in the model's response.
+    It can, for example, modify the source document, create new documents populated by AI-generated text,
+    remove existing documents, and so on.
+ E.g., the following script removes a comment from a blog post if the AI model has concluded that the comment is spam.
+
+
+ ```javascript
+ const idx = this.Comments.findIndex(c => c.Id == $input.Id);
+ if($output.Blocked)
+ {
+ this.Comments.splice(idx, 1);
+ }
+ ```
+
+
+  * The update script can also serve as an additional security measure, applying only actions
+ that we trust not to inflict any damage.
+
+### How to create and run a GenAI task:
+
+* You can use [Studio's intuitive wizard](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#add-a-genai-task)
+ to create GenAI tasks. The wizard will guide you through the task creation phases,
+ exemplify where needed, and provide you with convenient, interactive, secluded "playgrounds"
+ for free interactive experimenting.
+* Or, you can create GenAI tasks using the [Client API](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api).
+
+
+
+## Runtime
+
+Once you complete the configuration and save the task, it will start running (if enabled).
+The task will monitor the collection associated with it, and process documents as they are
+added or modified.
+
+### Tracking of processed document parts:
+
+* After creating a [context object](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#generate-context-objects)
+ for a document part and processing it, the GenAI task will create a hash code and log it in the document's metadata, under a property named after the user-defined task identifier.
+
+ The hash code is computed based on these elements:
+ * The context object
+ * The prompt
+ * The GenAI provider and model (e.g. OpenAI gpt-4o-mini)
+ * The JSON schema
+ * The update script
+
+* If the task is requested to process this document part again, it will compute a new hash code based on these elements, and compare it with the existing hash logged in the document metadata.
+ * If the new hash differs from the existing one, it will indicate that the content and/or the configuration changed, and the task will reprocess this document part.
+ * If the new hash is identical to the existing one, the task will conclude that the context object was already processed with the exact same content and task configuration, and skip reprocessing it.
+
+ **Tracking processed document parts**:
+ 
+
+ **Hash codes in document metadata**:
+ 
+
+ 1. **Identifier**
+ This is the user-defined task identifier (defined as part of the configuration).
+ 2. **Hash codes**
+ These hash codes were created after processing the document.
+ The codes were computed per comment, based on the comment's content and the current task configuration.
+ When the document is processed again, the task will generate a new hash code for each comment. If the comment or the task configuration has changed, the new hash will differ from the existing one and trigger reprocessing. If none of them changed, the identical hash will indicate that no reprocessing is needed.
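+
+For illustration only, here is a minimal C# sketch of the idea behind such a hash, assuming a simple concatenate-and-hash scheme. This is an illustration, not RavenDB's internal algorithm; the actual hashing and serialization may differ.
+
+```csharp
+using System;
+using System.Security.Cryptography;
+using System.Text;
+
+// Conceptual sketch: derive one hash per context object from the elements
+// listed above, so that a change in any of them yields a different hash
+// and triggers reprocessing of that document part.
+static string ComputeTrackingHash(
+    string contextObjectJson, string prompt, string providerAndModel,
+    string jsonSchema, string updateScript)
+{
+    string combined = string.Join("\u0000",
+        contextObjectJson, prompt, providerAndModel, jsonSchema, updateScript);
+    byte[] hash = SHA256.HashData(Encoding.UTF8.GetBytes(combined));
+    return Convert.ToHexString(hash);
+}
+```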
+
+
+
+## Licensing
+
+For RavenDB to support the GenAI Integration feature, you need a `RavenDB AI` license type.
+A `Developer` license will also enable the feature for experimentation and development.
+
+
+
+
+
+## Supported services
+
+Supported services include:
+
+* `OpenAI` and `OpenAI-compatible` services
+* `Ollama`
+
+
+
+## Common use cases
+
+GenAI tasks can be used to address numerous scenarios through intelligent content processing.
+Here are a few key use case categories.
+
+#### Data enrichment & enhancement use cases
+* **Document summarization**
+ Generate concise summaries of lengthy reports, articles, or legal documents.
+* **Data extraction**
+ Extract key details like dates, names, amounts, or entities from unstructured text.
+* **Content translation**
+ Automatically translate documents or user-generated content.
+
+#### Smart automation & workflows use cases
+* **Support ticket routing**
+ Analyze incoming tickets and automatically assign priority levels or route to appropriate teams.
+* **Compliance checking**
+ Scan documents for regulatory compliance issues or policy violations.
+* **Data quality improvement**
+ Standardize formats, correct inconsistencies, or enrich incomplete records.
+
+#### Enhanced search & discovery use cases
+* **Intelligent tagging**
+ Generate relevant keywords and metadata for better document searchability.
+* **Content recommendations**
+ Suggest related articles, products, or resources based on document analysis.
+* **Knowledge extraction**
+ Build searchable knowledge bases from unstructured document collections.
+
+#### Business intelligence & insights use cases
+* **Trend detection**
+ Identify patterns and emerging themes in customer communications or market data.
+* **Competitive analysis**
+ Monitor and analyze competitor mentions, pricing, or product information.
+* **Risk assessment**
+ Flag potentially problematic contracts, transactions, or communications.
+
+#### Content analysis & moderation use cases
+* **Content categorization**
+ Automatically tag and organize articles, documents, or media files.
+* **Spam and content filtering**
+ Automatically detect and flag spam, offensive, or inappropriate comments, reviews, or posts.
+* **Sentiment analysis**
+ Classify customer feedback, support tickets, or social media mentions by emotional tone.
+
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai-security-concerns.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai-security-concerns.mdx
new file mode 100644
index 0000000000..b50811b281
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai-security-concerns.mdx
@@ -0,0 +1,76 @@
+---
+title: "GenAI Integration: Security Concerns"
+hide_table_of_contents: true
+sidebar_label: Security Concerns
+sidebar_position: 5
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# GenAI Integration: Security Concerns
+
+
+This page addresses concerns that potential users of GenAI tasks may have
+regarding the safety of data sent to an AI model through the task, and the
+security of the database while running such tasks.
+
+* In this article:
+ * [Security measures](../../ai-integration/gen-ai-integration/gen-ai-security-concerns#security-measures)
+
+
+## Security measures
+
+Our approach toward data safety while using RavenDB AI tasks is that we need
+to take care of security on our end, rather than expect the AI model to protect
+our data.
+
+You can take these security measures:
+
+* **Use a local model when possible**
+  Use a local AI model like Ollama whenever you don't have to transmit your data
+ to an external model, to keep the data, as much as possible, within the safe
+ boundaries of your own network.
+
+* **Pick the right model**
+ RavenDB does not dictate what model to use, giving you full freedom to pick
+  the services that you want to connect to.
+  Choose the AI model you connect to wisely; some seem to be in better hands than others.
+
+* **Send only the data you want to send**
+ You are in full control of the data that is sent from your server to the AI model.
+ Your choices while defining the task, including the collection you associate the
+ task with and the [context generation script](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#generate-context-objects)
+  you define, determine exactly what data will be exposed to the AI model.
+  Take your time when preparing this script to make sure you send only the
+  data you actually want to send (see the sketch following this list).
+
+* **Use the playgrounds**
+  While defining your AI task, take the time to use Studio's
+ [playgrounds](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#generate-context-objects-playground)
+ to double-check what is actually sent.
+  There are separate playgrounds for the different stages; use them to test
+  your configuration on various documents and see exactly what you send
+  and what you receive.
+
+* **Use a secure server**
+  The AI model is **not** given access to your database. The data that you send it
+ voluntarily is all it gets. However, as always, if you care about your privacy
+ and safety, you'd want to use a [secure server](../../start/installation/setup-wizard#select-setup-mode).
+  This will ensure that you have full control over who can access your database
+  and with what permissions.
+
+* **Use your update script wisely**
+ When considering threats to our data we often focus on external risks,
+  but often it is we ourselves who endanger it the most.
+ The [update script](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#provide-update-script)
+ is the JavaScript that the GenAI task runs after receiving a reply from
+ the AI model. Here too, take your time to check this powerful script
+  using the built-in Studio [playground](../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#provide-update-script-playground).
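+
+As mentioned above, here is a minimal sketch of a context generation script that whitelists fields, written as it would appear in a C# task configuration. The `Email` field is hypothetical; it stands in for any sensitive data that should stay inside your network.
+
+```csharp
+// A minimal sketch, assuming comments carry a hypothetical sensitive
+// Email field that we do not want to expose to the AI model.
+var transformation = new GenAiTransformation
+{
+    Script = @"
+        for(const comment of this.Comments)
+        {
+            // Send only the text, author, and id - deliberately omit
+            // fields such as comment.Email that the model does not need.
+            ai.genContext({Text: comment.Text, Author: comment.Author, Id: comment.Id});
+        }"
+};
+```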
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai_start.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai_start.mdx
new file mode 100644
index 0000000000..1c0e070e19
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/gen-ai_start.mdx
@@ -0,0 +1,60 @@
+---
+title: "GenAI tasks: Start"
+hide_table_of_contents: true
+sidebar_label: Start
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+
+import CardWithImage from "@site/src/components/Common/CardWithImage";
+import CardWithImageHorizontal from "@site/src/components/Common/CardWithImageHorizontal";
+import ColGrid from "@site/src/components/ColGrid";
+import genAiStartOvImage from "./assets/gen-ai_start_ov-image.png";
+import genAiStartApiImage from "./assets/gen-ai_start_api-image.png";
+import genAiStartStudioImage from "./assets/gen-ai_start_studio-image.png";
+import unlockGenAiPotentialArticleImage from "./assets/unlock-genai-potential-article-image.webp";
+import articleGenAiImage from "./assets/article-cover-genai.webp";
+
+import ayendeBlogImage from "@site/static/img/from-ayende-com.webp";
+import webinarThumbnailPlaceholder from "@site/static/img/webinar.webp";
+
+# GenAI tasks
+
+### Build intelligent workflows with GenAI tasks.
+GenAI tasks are [ongoing operations](../../studio/database/tasks/ongoing-tasks/general-info) that continuously monitor specified collections and process documents as they are added or modified.
+- Similar to [ETL tasks](../../studio/database/tasks/ongoing-tasks/ravendb-etl-task), a GenAI task extracts content from documents. But instead of sending the content to another database, the task sends it to an AI model (like OpenAI) along with a guiding **prompt** and a **JSON schema** that defines the layout for the model's response.
+- When the LLM responds, the GenAI task can use its response to, for example, update the source document with LLM-generated content, or create new documents in the database.
+- GenAI tasks can infuse intelligence into a wide variety of content handling scenarios.
+ E.g., they can enrich documents with AI-generated summaries or classifications, translate text into different languages, or generate new content based on existing data.
+- You can easily create GenAI tasks using Studio or the client API.
+  When created via Studio, each step of the task can be easily tested and validated before deployment.
+
+### Use cases
+Here are some of the main categories in which GenAI tasks can help.
+* Data enrichment & enhancement
+* Smart automation & workflows
+* Enhanced search & discovery
+* Business intelligence & insights
+* Content analysis & moderation
+
+### Technical documentation
+Learn how to create and manage tasks that intelligently process your data and transform your content.
+
+
+
+
+
+
+#### Learn more: In-depth GenAI tasks articles
+
+
+
+
+
+
+### Related lives & Videos
+Learn how GenAI tasks help create reliable and effective AI-powered workflows.
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/_category_.json b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/_category_.json
new file mode 100644
index 0000000000..23fad24d6f
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 3,
+ "label": "Modify GenAI Task"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/gen-ai_review-task-configuration.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/gen-ai_review-task-configuration.png
new file mode 100644
index 0000000000..9e0a0c7866
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/gen-ai_review-task-configuration.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/gen-ai_task-view_edit.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/gen-ai_task-view_edit.png
new file mode 100644
index 0000000000..ad2c6bd83b
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/gen-ai_task-view_edit.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/snagit/gen-ai_review-task-configuration.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/snagit/gen-ai_review-task-configuration.snagx
new file mode 100644
index 0000000000..867fd18bcc
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/snagit/gen-ai_review-task-configuration.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/snagit/gen-ai_task-view_edit.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/snagit/gen-ai_task-view_edit.snagx
new file mode 100644
index 0000000000..4f8e13bfb1
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/assets/snagit/gen-ai_task-view_edit.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/modify-gen-ai-task_api.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/modify-gen-ai-task_api.mdx
new file mode 100644
index 0000000000..100c502950
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/modify-gen-ai-task_api.mdx
@@ -0,0 +1,166 @@
+---
+title: "Modify GenAI Task: API"
+hide_table_of_contents: true
+sidebar_label: "Client API"
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Modify GenAI Task: API
+
+
+
+* To modify an existing GenAI task, register a modified task configuration object with the server using the existing `TaskID`, via the `UpdateGenAiOperation` store operation.
+* Note that this `TaskID` is **not** the [user-defined task identifier](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#configure-basic-settings) that we define as part of the task configuration, but an identifier that RavenDB uses internally to manage the task (same as it does with other ongoing tasks like ETL tasks, backup tasks, and others).
+ * The **user-defined task identifier** is a `string` variable that is mainly used as a property name for a list of hashes that identify [processed document parts](../../../ai-integration/gen-ai-integration/gen-ai-overview#tracking-of-processed-document-parts) in the document metadata.
+ * The `TaskID` is a `long` variable that is used by RavenDB to identify and manage the task.
+ See the examples below to learn how to extract the `TaskID` and use it to register the modified task configuration.
+
+* In this article:
+ * [Modify task configuration](../../../ai-integration/gen-ai-integration/modify-gen-ai-task/modify-gen-ai-task_api#modify-task-configuration)
+ * [Syntax](../../../ai-integration/gen-ai-integration/modify-gen-ai-task/modify-gen-ai-task_api#syntax)
+
+
+
+
+
+## Modify task configuration
+
+To modify the configuration of an existing GenAI task:
+* Retrieve the ongoing task information using `GetOngoingTaskInfoOperation`, passing it:
+ * The existing task's user-defined task identifier (a `string` variable).
+ * The task type (`OngoingTaskType.GenAi` for GenAI tasks).
+* Extract the `TaskID` (a `long` variable) from the returned `OngoingTaskInfo` object.
+* You can -
+ * Either **modify the existing task configuration** and change only selected sections of it
+ (this approach is often easier as you can change only relevant details),
+ * Or **create a new configuration object** and populate it from scratch with new settings for your task
+ (this approach may be preferable if you want to redefine the whole configuration).
+* Register the new or modified configuration with the server using `UpdateGenAiOperation`, passing it:
+ * The extracted `TaskID`.
+ * The configuration object.
+
+### Examples:
+The examples below modify the spam filter demonstrated in the [create GenAI task](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api#full-example) article, which removes spam comments from documents in the `Posts` collection.
+ * The first example, **modify-selected-configuration-details**, demonstrates how to retrieve the existing configuration, modify selected sections of it, and register it with the server again.
+ * The second example, **create-configuration-from-scratch**, demonstrates how to create a new configuration object, populate it with all the necessary configuration details, and register it with the server.
+ * Both examples leave all details as configured in the original example except for the task **name**, the user-defined task **identifier**, and the **update script**, which no longer removes suspected comments but instead adds a `Warning` property to each comment suspected as spam, explaining why the comment might be spam.
+
+
+
+
+```csharp
+// Provide the existing user-defined task identifier to retrieve the ongoing task info
+var getTaskInfo = new GetOngoingTaskInfoOperation("spam-filter", OngoingTaskType.GenAi);
+var ongoingTask = store.Maintenance.Send(getTaskInfo); // returns existing task info
+
+// Extract the internal TaskID that RavenDB uses to manage the task
+long TaskId = ongoingTask.TaskId;
+
+// Use the existing task configuration as a base for modifications
+var modifiedConfig = ((GenAi)ongoingTask).Configuration;
+
+// Modify selected details
+modifiedConfig.Identifier = "spam-warning-filter";
+modifiedConfig.Name = "spam-warning-filter";
+modifiedConfig.UpdateScript = @"
+ // Find the comment
+ const idx = this.Comments.findIndex(c => c.Id == $input.Id);
+ // Was detected as spam
+ if($output.Blocked)
+ {
+ // Add a warning to the comment instead of removing it
+ this.Comments[idx].Warning = 'This comment may be spam: ' + $output.Reason;
+ }";
+
+// Update the GenAI task using the existing TaskID and the modified configuration
+store.Maintenance.Send(new UpdateGenAiOperation(TaskId, modifiedConfig));
+```
+
+
+
+```csharp
+// Provide the existing user-defined task identifier to retrieve the ongoing task info
+var getTaskInfo = new GetOngoingTaskInfoOperation("spam-filter", OngoingTaskType.GenAi);
+var ongoingTask = store.Maintenance.Send(getTaskInfo);
+
+// Extract the internal TaskID that RavenDB uses to manage the task
+long TaskId = ongoingTask.TaskId;
+
+// Create and populate a new task configuration object
+GenAiConfiguration newConfig = new GenAiConfiguration
+{
+ // New user-defined task identifier
+ Identifier = "spam-warning-filter",
+
+ // New task name
+ Name = "spam-warning-filter",
+
+ // Connection string to AI model
+ ConnectionStringName = "open-ai-cs",
+
+ // Task is enabled
+ Disabled = false,
+
+ // Collection associated with the task
+ Collection = "Posts",
+
+ // Context generation script - format for objects to be sent to the AI model
+ GenAiTransformation = new GenAiTransformation
+ {
+ Script = @"
+ for(const comment of this.Comments)
+ {
+                ai.genContext({Text: comment.Text, Author: comment.Author, Id: comment.Id});
+            }"
+ },
+
+ // AI model Prompt - the instructions sent to the AI model
+ Prompt = "Check if the following blog post comment is spam or not",
+
+ // Sample object - the layout for the AI model's response
+ SampleObject = @"
+ {
+ ""Blocked"": true,
+ ""Reason"": ""Concise reason for why this comment was marked as spam or ham""
+ }",
+
+ // New Update script - specifies what to do with AI model replies.
+ UpdateScript = @"
+ // Find the comment
+ const idx = this.Comments.findIndex(c => c.Id == $input.Id);
+ // Was detected as spam
+ if($output.Blocked)
+ {
+ // Add a warning to the comment instead of removing it
+ this.Comments[idx].Warning = 'This comment may be spam: ' + $output.Reason;
+ }",
+
+ // Max concurrent connections to AI model
+ MaxConcurrency = 4
+};
+
+// Update the GenAI task using the existing TaskID and the new configuration
+store.Maintenance.Send(new UpdateGenAiOperation(TaskId, newConfig));
+```
+
+
+
+### Syntax:
+
+* `UpdateGenAiOperation` definition:
+ ```csharp
+ public class UpdateGenAiOperation(long taskId, GenAiConfiguration configuration, StartingPointChangeVector startingPoint = null) : IMaintenanceOperation
+ ```
+
+ | Parameters | Type | Description |
+ | ------------- | ------------- | ----- |
+ | `taskId` | `long` | The internal RavenDB `TaskID` of the task to update. |
+ | `configuration` | `GenAiConfiguration` | The new or modified configuration for the GenAI task. |
+ | `startingPoint` | `StartingPointChangeVector` | Optional starting point for the update operation. |
+
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/modify-gen-ai-task_studio.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/modify-gen-ai-task_studio.mdx
new file mode 100644
index 0000000000..399293cda1
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/modify-gen-ai-task/modify-gen-ai-task_studio.mdx
@@ -0,0 +1,44 @@
+---
+title: "Modify GenAI Task: Studio"
+hide_table_of_contents: true
+sidebar_label: Studio
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Modify GenAI Task: Studio
+
+Saved tasks are listed in the AI Tasks view.
+Selecting a task from the list will take you to the task's [Review task configuration](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#review-configuration-and-save-task) page, which provides an overall view of the task configuration and allows you to review different sections, edit them, and save the modified configuration when you're done.
+
+
+
+1. **AI Hub**
+ Click to open the [AI Hub view](../../../ai-integration/ai-tasks-list-view).
+2. **AI Tasks**
+ Click to open the AI Tasks view.
+3. **Tasks list**
+ Pick the task that you want to modify by clicking its name or edit (pencil) icon.
+ This will take you to the task's **Review task configuration** page.
+
+
+
+## Review and edit task configuration
+
+
+
+* Use this view to review, edit, and save the task configuration.
+
+* Click **Show** to view the current settings of a configuration section,
+ or **Edit** to modify a configuration section using the same [task creation wizard](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio) used to initially define the task.
+
+ * If the task is enabled, your modifications will take effect as soon as you save the configuration.
+ Test your task and make sure you understand how it might change your documents before saving.
+ * Take every precaution to protect your data, including ensuring it is backed up.
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/_category_.json b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/_category_.json
new file mode 100644
index 0000000000..fec2fde5f5
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 4,
+ "label": "Process attachments"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_attachment-example.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_attachment-example.png
new file mode 100644
index 0000000000..7f4dc8455d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_attachment-example.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_attachments-list.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_attachments-list.png
new file mode 100644
index 0000000000..6c1d531555
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_attachments-list.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_include-attachment.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_include-attachment.png
new file mode 100644
index 0000000000..153dd82fd8
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_include-attachment.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_test-context.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_test-context.png
new file mode 100644
index 0000000000..e27e75cfdb
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_context-generation-script_test-context.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_electric-toys-collection-after-processing.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_electric-toys-collection-after-processing.png
new file mode 100644
index 0000000000..2b090c30c3
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_electric-toys-collection-after-processing.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_prompt-and-json-schema_include-attachment-analysis.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_prompt-and-json-schema_include-attachment-analysis.png
new file mode 100644
index 0000000000..35dcdf812a
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_prompt-and-json-schema_include-attachment-analysis.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_prompt-and-json-schema_test-prompt-and-schema.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_prompt-and-json-schema_test-prompt-and-schema.png
new file mode 100644
index 0000000000..eaaed9f547
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_prompt-and-json-schema_test-prompt-and-schema.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_update-script_test-update-script.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_update-script_test-update-script.png
new file mode 100644
index 0000000000..c3e08b4112
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_update-script_test-update-script.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_update-script_update-document-with-llm-response.png b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_update-script_update-document-with-llm-response.png
new file mode 100644
index 0000000000..9b59b6417c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/gen-ai_update-script_update-document-with-llm-response.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_attachment-example.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_attachment-example.snagx
new file mode 100644
index 0000000000..d5c2007211
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_attachment-example.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_attachments-list.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_attachments-list.snagx
new file mode 100644
index 0000000000..0a1fde5ece
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_attachments-list.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_include-attachment.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_include-attachment.snagx
new file mode 100644
index 0000000000..c5f3b1599e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_include-attachment.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_test-context.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_test-context.snagx
new file mode 100644
index 0000000000..79ae86b1d7
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_context-generation-script_test-context.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_electric-toys-collection-after-processing.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_electric-toys-collection-after-processing.snagx
new file mode 100644
index 0000000000..fb465a1cab
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_electric-toys-collection-after-processing.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_prompt-and-json-schema_include-attachment-analysis.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_prompt-and-json-schema_include-attachment-analysis.snagx
new file mode 100644
index 0000000000..84d168a4fd
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_prompt-and-json-schema_include-attachment-analysis.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_prompt-and-json-schema_test-prompt-and-schema.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_prompt-and-json-schema_test-prompt-and-schema.snagx
new file mode 100644
index 0000000000..745b23f086
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_prompt-and-json-schema_test-prompt-and-schema.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_update-script_test-update-script.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_update-script_test-update-script.snagx
new file mode 100644
index 0000000000..3724462ada
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_update-script_test-update-script.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_update-script_update-document-with-llm-response.snagx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_update-script_update-document-with-llm-response.snagx
new file mode 100644
index 0000000000..246a678a81
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/assets/snagit/gen-ai_update-script_update-document-with-llm-response.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/processing-attachments_api.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/processing-attachments_api.mdx
new file mode 100644
index 0000000000..220c127eb4
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/processing-attachments_api.mdx
@@ -0,0 +1,162 @@
+---
+title: "Process attachments: API"
+hide_table_of_contents: true
+sidebar_label: Client API
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Process attachments: API
+
+
+
+* A GenAI task can send the LLM not only documents, but also files attached to the documents.
+
+* Supported file types are:
+ * **Plain text files**
+ Text files are sent to the LLM as is, without any additional encoding.
+ * **Image files: `jpeg`, `png`, `webp`, `gif`**
+    Image files are sent to the LLM as base64-encoded strings.
+ * **PDF files**
+    PDF files are sent to the LLM as base64-encoded strings.
+
+* In this article:
+ * [Sending attachments to the LLM](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_api#sending-attachments-to-the-llm)
+
+
+
+
+
+## Sending attachments to the LLM
+
+
+Find a complete example of defining and running a GenAI task that processes attachments in [Processing attachments: Studio](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio).
+
+
+To send documents to the LLM along with their attachments using the API, define and run your GenAI task just as you would without attachments, with the following differences:
+
+* When [defining the connection string](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_api#defining-a-connection-string) to the LLM, make sure the AI model you're using is capable of processing the files attached to your documents.
+ E.g., use OpenAI `gpt-4.1-mini` to process attached image files.
+
+* When [creating the context object](../../../ai-integration/gen-ai-integration/gen-ai-overview#the-elements) that will be sent to the LLM, include document attachments by specifying them in the context generation script. The LLM will receive and process the attachments along with the main document content.
+
+  * Use the `with<Type>` method of the `ai.genContext` object to include document attachments.
+
+  * Replace `<Type>` with the type of the attachment you want to include:
+ `withText` - for plain text files
+ `withPng` - for PNG image files
+ `withJpeg` - for JPEG image files
+ `withWebp` - for WEBP image files
+ `withGif` - for GIF image files
+ `withPdf` - for PDF files
+
+  * Pass the attached file to `with<Type>` by calling `loadAttachment` with the file name as an argument.
+ E.g., to include a PNG attachment named `electric-circuit.png`, use:
+ ```javascript
+ ai.genContext({ ToyName: this.Name, ToyId: id(this) })
+ .withPng(loadAttachment(`electric-circuit.png`));
+ ```
+
+
+ Additional options include:
+
+ * [Conditional attachment](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#conditional-attachment)
+ * [Multiple attachments](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#multiple-attachments)
+ * [Embedding base64-encoded images in the context object](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#embedding-base64-encoded-images-in-the-context-object)
+ * [Embedding text in the context object](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#embedding-text-in-the-context-object)
+
+
+
+
+* When [defining the task Prompt and JSON schema](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#set-task-prompt-and-json-schema), include instructions in the prompt for how the LLM should handle the attachments, and add fields to the schema for any attachment-related information you expect the LLM to return.
+
+* When [defining the task Update script](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#set-task-update-script), include logic that handles the parts of the LLM's responses derived from the attachments.
+
+## Example
+
+```csharp
+using (var store = new DocumentStore())
+{
+ // Define the connection string to OpenAI
+ var connectionString = new AiConnectionString
+ {
+ // Connection string name & identifier
+ Name = "open-ai-cs",
+
+ // Connection type
+ ModelType = AiModelType.Chat,
+
+ // OpenAI connection settings
+ OpenAiSettings = new OpenAiSettings(
+ apiKey: "your-api-key",
+ endpoint: "https://api.openai.com/v1",
+ model: "gpt-4.1-mini") // Model capable of handling image processing
+ };
+
+ // Deploy the connection string to the server
+ var operation = new PutConnectionStringOperation(connectionString);
+ var putConnectionStringResult = store.Maintenance.Send(operation);
+
+ // Define the GenAI task configuration
+ GenAiConfiguration config = new GenAiConfiguration
+ {
+ // Task name
+ Name = "electric-toy-circuit-description",
+
+ // Unique task identifier
+ Identifier = "electric-toy-circuit-description",
+
+ // Connection string to AI model
+ ConnectionStringName = "open-ai-cs",
+
+ // Task is enabled
+ Disabled = false,
+
+ // Collection associated with the task
+ Collection = "ElectricToys",
+
+ // Context generation script - format for objects to be sent to the AI model
+ // Include document attachments in the context object using `with` methods
+ GenAiTransformation = new GenAiTransformation {
+ Script = @"
+ ai.genContext({ ToyName: this.Name, ToyId: id(this) })
+ .withPng(loadAttachment(`electric-circuit.png`));"
+ },
+
+ // AI model Prompt - the instructions sent to the AI model
+ Prompt = "You get documents from an `ElectricToys` document collection. " +
+ "These are toys for youth that wants to learn simple electronics. " +
+ "Each document includes a toy's ID and name, and an attached " +
+ "image with the scheme of a circuit that operates the toy. " +
+ "Your job is to provide a simple description of up to 20 words " +
+ "for the circuit, that will be added to the toy's document to " +
+ "describe how it is operated.",
+
+ // Sample object - a sample response object to format the AI model's replies by
+ SampleObject = JsonConvert.SerializeObject( new {
+ ToyName = "Toy name as provided by the GenAI task",
+ ToyId = "Toy ID as provided by the GenAI task",
+ CircuitDescription = "LLM's description of the electric circuit"
+ }),
+
+ // Update script - specifies what to do with AI model replies
+ UpdateScript = @"
+ // Embed LLM response in source document
+ this.CircuitDescription = $output.CircuitDescription;",
+
+ // Max concurrent connections to AI model
+ MaxConcurrency = 4
+ };
+
+ // Run the task
+    var genAiOperation = new AddGenAiOperation(config);
+    var addAiIntegrationTaskResult = store.Maintenance.Send(genAiOperation);
+}
+
+```
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio.mdx b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio.mdx
new file mode 100644
index 0000000000..2b327464ac
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio.mdx
@@ -0,0 +1,223 @@
+---
+title: "Process attachments: Studio"
+hide_table_of_contents: true
+sidebar_label: Studio
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Process attachments: Studio
+
+
+* When a GenAI task sends the LLM a document that has files attached to it, the task
+  can send the attached files along with the document so the LLM processes them as well.
+  This way you can, for example, have the LLM analyze technical schemes attached to product documents, review reports attached to user profiles, and so on.
+
+* Supported file types are:
+ * Plain text files
+ * Image files: `jpeg`, `png`, `webp`, `gif`
+ * PDF files
+
+* Attached text files are sent to the LLM as plain text.
+  Attached PDF and image files are sent to the LLM as base64-encoded strings.
+
+* Make sure the AI model you use is capable of handling the attachments you send it.
+ E.g., to process image files you can use OpenAI's `gpt-4.1-mini` model.
+
+* In this article:
+ * [Include attachments in the Context generation script](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#include-attachments-in-the-context-generation-script)
+ * [Conditional attachment](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#conditional-attachment)
+ * [Multiple attachments](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#multiple-attachments)
+ * [Embedding base64-encoded images in the context object](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#embedding-base64-encoded-images-in-the-context-object)
+ * [Embedding text in the context object](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#embedding-text-in-the-context-object)
+ * [Set task Prompt and JSON schema](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#set-task-prompt-and-json-schema)
+ * [Set task Update script](../../../ai-integration/gen-ai-integration/process-attachments/processing-attachments_studio#set-task-update-script)
+
+
+
+
+
+## Include attachments in the Context generation script
+
+
+
+Our example GenAI task sends the LLM documents from the `ElectricToys` collection.
+Each document in this collection has an image file named `electric-circuit.png` attached to it, illustrating a simple electric circuit that operates the toy. For example:
+
+
+
+We want the LLM to analyze each circuit scheme and return a short explanation of how the circuit operates.
+We will then embed this explanation in the original document.
+
+
+* Learn about the GenAI task's context generation script and how to set it [here](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#generate-context-objects).
+
+* When creating the context object that will be sent to the LLM, include document attachments by specifying them in the context generation script. The LLM will receive and process the attachments along with the main document content.
+
+* Use the `with<Type>` method of the `ai.genContext` object to include document attachments.
+  Replace `<Type>` with the type of the attachment you want to include:
+ * `withText` - for plain text files
+ * `withPng` - for PNG image files
+ * `withJpeg` - for JPEG image files
+ * `withWebp` - for WEBP image files
+ * `withGif` - for GIF image files
+ * `withPdf` - for PDF files
+
+* Pass the attached file to `with<Type>` by calling `loadAttachment` with the file name as an argument.
+
+* The context generation script below includes the source document's Name and ID in the context object, and uses `withPng` to also include the `electric-circuit.png` file attached to the document.
+ ```javascript
+ ai.genContext({ ToyName: this.Name, ToyId: id(this) })
+ .withPng(loadAttachment("electric-circuit.png"));
+ ```
+
+
+
+1. **Context generation script**
+ Provide a JavaScript that generates the context object sent to the AI model for each document processed by the GenAI task.
+ In the script, include any attachment you want the LLM to process,
+
+2. **Test context**
+ Click to test your context generation script to ensure it works as expected.
+ The test result shows the generated context object, including the attached files.
+
+ 
+
+   * **See attachments**: Click for a list of the attachments included in the context:
+ 
+
+### Conditional attachment:
+
+When `loadAttachment` fails to load an attachment, it returns `null`.
+You can use this to send an attachment only when `loadAttachment` succeeds in loading it.
+Including a non-existing attachment in the context object will not generate an error, but the LLM will receive a "not found" message instead of the attachment.
+
+```javascript
+// Verify attachment existence before sending it to the LLM
+const img = loadAttachment("electric-circuit.png");
+if (img != null) {
+ ai.genContext({ ToyName: this.Name, ToyId: id(this) })
+ .withPng(img);
+} else {
+ ai.genContext({ ToyName: this.Name, ToyId: id(this) });
+}
+```
+
+### Multiple attachments:
+
+You can include multiple attachments in the context object.
+E.g., to include both a PNG and a PDF attachment, use:
+
+```javascript
+const img = loadAttachment("electric-circuit.png");
+const pdf = loadAttachment("circuit-diagram.pdf");
+ai.genContext({ ToyName: this.Name, ToyId: id(this) })
+ .withPng(img)
+ .withPdf(pdf);
+```
+
+### Embedding base64-encoded images in the context object:
+
+You can embed base64-encoded images in the context object instead of sending them as separate attachments.
+E.g.,
+
+```javascript
+// Base64-encoded image string
+const starImage = "iVBORw0KGgoAAAANSUhEUgAAACUAAAAkCAYAAAAOwvOmAAAACXBIWXMAABYlAAAWJQFJUiTwAAAAB3RJTUUH6QkOABYvpxl9JgAAA6VJREFUWIXNmL9vG2UYxz/Pe45TJ45BdIlAQgilTdK0YihCsCAW1IHCDv8BAjYGRsQfgMRC5w4MbDAgxIL4IaHCQJe0glYkRaFVQ5M0iY3PPt+9X4a7Om5qn12fCf1Kr3X2Pffc55679/s+PpMkHjG5SSVStEEyocubDJSHxrWnseRPJsFVHEqg9hbOQ3jtWUT8CEAZ1NfmsQTAY+ENKFivwlBRYw+nBJXAJ1BfW0xJ/y8oIaKbT4IHZRxOHt9ZQwWqVQBK+PYqLm52s1gGFv6+gLCxsQpAGa3rL+D7nFkI3/x57MxjQgkfXsZoDYyIbryIP0ooyWhcf568tUAxJPVfx7qFI0DdS+tBSkf4I84NqYNB5/ZZPKCHrJk9uPYpnUrW8xUwton3f6JV/w6/+wlmCZZTBlN2PXOvk0y/QvWx13DHFpA5zOVbhqmXym+h8BLNO1/Q3P+F6WAV50SQpCfwxeynq8AgSkoElVexx89SeeJNnC2BOVAG1bg6j2mT3irf8x31gORVpqhkYAlUTl5Jobwg/O04+B3kKWrIY8kb1FZ2kZ/NHnQTleU7+KmXsYk1M6NLBtWTTWAWM5dCOQyTqJ74lqR8/shgLBvVE01cqUxqBnZgCWYBJqO28CXMvH8kUDKYPRVhpQpYkEKZHbIEZR+CcPMjku0P/xsay9xiJQKm7nnOwe5BPbrkiTYvEG+/m+vcDyuld4i55Q5Q6s88GEr4RHT2Pye+9dZEgExAMM3McivtIaz/NB8IlYIlmIl472vCv94o7FPSFDOnIlzgs7vVf6r3r18mswDwWPBcMZpuwhgz3zu/+moEV/K0tz6eDJMJG8EIR4AKUHiRYU3BKJLo2xSOAQXEuwVxetT+YWjIcCgZKJgEDk6Q7FwYHjeUyUfIJTkBB5vG/V3FA6EG4c7w3n0oVNT4Pj/AA1NnmDndovzUN2mXMUAmmLJ1IM79v5rvU0B9/SVc49KhZQBw4Eo1Kot/A9NpfJaptfkVfut82hRmv1nPsZUzwh1aWnqVW6kkAYU3DwizRB33DJWlOtOLe12g3qDy/DmOnRbl+U+7y0o3wsCpTl6pcqFcAIE2ukljq1FdalNb+QMXVAf2gkaAIcrH32ZuWQS1D+7b3777WX4jqRz5xOufK+ju5VnF7W15L3nv8w7JSSa1br2nxirauXpOUjIwNBdKuq2ktaGOvJKcJCMydUdj/R15Hw+MzX3Q6b6msOzNwISad5HNlv75/gUYA5gJ5sXALwAAAABJRU5ErkJggg==";
+ai.genContext({ ToyName: this.Name, ToyId: id(this) })
+ .withPng(starImage);
+```
+
+### Embedding text in the context object:
+
+You can embed plain text in the context object instead of sending it as a separate attachment.
+E.g.,
+
+```javascript
+const imageDescription = "Simple switch turning the toy on and off";
+ai.genContext({ ToyName: this.Name, ToyId: id(this) })
+ .withText(imageDescription)
+```
+
+or -
+
+```javascript
+ai.genContext({ ToyName: this.Name, ToyId: id(this) })
+ .withText("Simple switch turning the toy on and off")
+```
+
+
+
+## Set task Prompt and JSON schema
+
+* Learn about the GenAI task's Prompt and JSON schema and how to set them [here](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#define-prompt-and-json-schema).
+
+* While defining the task prompt and JSON schema, consider the attachments you include in the context object.
+ Make sure your prompt instructs the LLM to analyze the attached files, and that your Sample response object (or JSON schema) is designed to capture LLM output derived from these files.
+
+* In the examples below, the prompt instructs the LLM to analyze the electric circuit scheme attached to each toy document, and the Sample response object is designed to capture a description of the circuit based on the analysis of the attached image.
+
+ * **Prompt example**:
+ ```text
+ You get documents from an "ElectricToys" document collection.
+    These are toys for youths who want to learn how to operate simple circuits.
+ Each document includes a toy's ID and name, and an attached image with the scheme of a circuit that operates the toy.
+    Your job is to provide a simple description of up to 20 words for the circuit that will be added to the toy's document to describe how it is operated.
+ ```
+ * **Sample response object example**:
+ ```json
+ {
+ "ToyName": "Toy name as provided by the GenAI task",
+ "ToyId": "Toy ID as provided by the GenAI task",
+ "CircuitDescription": "LLM's description of the electric circuit"
+ }
+ ```
+
+
+1. **Prompt**
+ Provide a prompt that instructs the AI model how to process each document, including its attachments.
+ In the prompt, specify what information you expect the AI model to derive from the attached files (in this case - a description of the electric circuit depicted by the attached image).
+
+2. **Sample response object / JSON schema**
+ Define a response object or a schema that outlines the structure of the response you expect from the AI model.
+ Ensure the schema includes fields that will capture the AI model's analysis of the attached files (in this case - a field for the circuit description).
+
+3. **Test model**
+ Click to test your prompt and JSON schema to ensure they work as expected.
+ The test result shows a sample response from the AI model, formatted according to your JSON schema, including information about the attached file.
+
+ 
+
+
+
+## Set task Update script
+
+* Learn about the GenAI task's Update script and how to set it [here](../../../ai-integration/gen-ai-integration/create-gen-ai-task/create-gen-ai-task_studio#provide-update-script).
+
+* Provide an update script that processes the AI model's responses, including any response derived from the attached files, and updates the source documents in your database accordingly.
+
+* In the example below, the update script takes the circuit description provided by the LLM, and simply updates the source document's `CircuitDescription` field with it.
+
+ ```javascript
+ this.CircuitDescription = $output.CircuitDescription;
+ ```
+
+
+
+1. **Update script**
+ Provide a JavaScript that processes the results object returned from the AI model and takes needed actions.
+2. **Test update script**
+ Click to test your update script to ensure it works as expected.
+ The test result shows how the currently processed document will be updated based on the AI model's response, including any information derived from the attached files.
+
+ 
+
+   Here is a list of all documents in the ElectricToys collection after the GenAI task sent them to the LLM along with their attached images and updated them according to the LLM's analysis:
+
+ 
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/_category_.json b/versioned_docs/version-7.1/ai-integration/generating-embeddings/_category_.json
new file mode 100644
index 0000000000..1ba1b3a8d8
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/generating-embeddings/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 1,
+ "label": "Generating Embeddings"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-1.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-1.png
new file mode 100644
index 0000000000..bedad14b23
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-2.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-2.png
new file mode 100644
index 0000000000..23b77a447b
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-3.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-3.png
new file mode 100644
index 0000000000..0b16de4606
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-3.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-4-script.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-4-script.png
new file mode 100644
index 0000000000..67e94d45df
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-4-script.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-4.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-4.png
new file mode 100644
index 0000000000..fa47cfdf5e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-4.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-5.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-5.png
new file mode 100644
index 0000000000..6a4b828c96
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-5.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-6.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-6.png
new file mode 100644
index 0000000000..42d20a25c5
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/add-ai-task-6.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/ai-search-article-cover.webp b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/ai-search-article-cover.webp
new file mode 100644
index 0000000000..9696394161
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/ai-search-article-cover.webp differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-1.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-1.png
new file mode 100644
index 0000000000..1c21c4585f
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-2.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-2.png
new file mode 100644
index 0000000000..8fc4b3af2d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-3.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-3.png
new file mode 100644
index 0000000000..b67df995d3
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-cache-3.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-collection-1.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-collection-1.png
new file mode 100644
index 0000000000..eee180fc7f
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-collection-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-collection-2.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-collection-2.png
new file mode 100644
index 0000000000..f72721196e
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-collection-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation-task-flow.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation-task-flow.png
new file mode 100644
index 0000000000..c3e65b2f8c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation-task-flow.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_api-image.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_api-image.png
new file mode 100644
index 0000000000..a43afaf78d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_api-image.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_ov-image.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_ov-image.png
new file mode 100644
index 0000000000..b90ffb74a8
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_ov-image.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_studio-image.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_studio-image.png
new file mode 100644
index 0000000000..19a72d98dc
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/embeddings-generation_start_studio-image.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/vector-search-flow.png b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/vector-search-flow.png
new file mode 100644
index 0000000000..e7b09525a8
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/generating-embeddings/assets/vector-search-flow.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/content/_embeddings-generation-task-csharp.mdx b/versioned_docs/version-7.1/ai-integration/generating-embeddings/content/_embeddings-generation-task-csharp.mdx
new file mode 100644
index 0000000000..68198c5af1
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/generating-embeddings/content/_embeddings-generation-task-csharp.mdx
@@ -0,0 +1,520 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In RavenDB, you can define AI tasks to automatically generate embeddings from your document content.
+ These embeddings are then stored in [dedicated collections](../../../ai-integration/generating-embeddings/embedding-collections.mdx) within the database,
+ enabling [Vector search](../../../ai-integration/vector-search/ravendb-as-vector-database.mdx) on your documents.
+
+* This article explains how to configure such a task.
+ It is recommended to first refer to this [Overview](../../../ai-integration/generating-embeddings/overview.mdx#embeddings-generation---overview)
+ to understand the embeddings generation process flow.
+
+* In this article:
+ * [Configuring an embeddings generation task - from the Studio](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configuring-an-embeddings-generation-task---from-the-studio)
+ * [Configuring an embeddings generation task - from the Client API](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configuring-an-embeddings-generation-task---from-the-client-api)
+ * [Define source using PATHS](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configure-an-embeddings-generation-task---define-source-using-paths)
+ * [Define source using SCRIPT](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configure-an-embeddings-generation-task---define-source-using-script)
+ * [Chunking methods and tokens](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#chunking-methods-and-tokens)
+ * [Syntax](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#syntax)
+
+
+
+## Configuring an embeddings generation task - from the Studio
+
+* **Define the general task settings**:
+
+ 
+
+ 1. **Name**
+ Enter a name for the task.
+ 2. **Identifier**
+ Enter a unique identifier for the task.
+ Each AI task in the database must have a distinct identifier.
+
+ If not specified, or when clicking the "Regenerate" button,
+ RavenDB automatically generates the identifier based on the task name. For example:
+ * If the task name is: _"Generate embeddings from OpenAI"_
+ * The generated identifier will be: _"generate-embeddings-from-openai"_
+
+ Allowed characters: only lowercase letters (a-z), numbers (0-9), and hyphens (-).
+
+ **This identifier is used:**
+ * When querying embeddings generated by the task via a dynamic query.
+ An example is available in [Querying pre-made embeddings](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-pre-made-embeddings-generated-by-tasks).
+ * When indexing the embeddings generated by the task.
+ An example is available in [Indexing pre-made text-embeddings](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-pre-made-text-embeddings).
+ * In documents in the [Embeddings collection](../../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-collection),
+ where the task identifier is used to identify the origin of each embedding.
+
+ See how this identifier is used in the [Embeddings collection](../../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-collection)
+ documents that reference the generated embeddings.
+
+ 3. **Regenerate**
+ Click "Regenerate" to automatically create an identifier based on the task name.
+ 4. **Task state**
+ Enable/Disable the task.
+ 5. **Responsible node**
+ Select a node from the [Database group](../../../studio/database/settings/manage-database-group.mdx) to be the responsible node for this task.
+ 6. **Connection string**
+ Select a previously defined [AI connection string](../../../ai-integration/connection-strings/connection-strings-overview.mdx) or create a new one.
+ 7. **Enable document expiration**
+ This toggle appears only if the [Document expiration feature](../../../studio/database/settings/document-expiration.mdx) is Not enabled in the database.
+       Enabling document expiration ensures that embeddings in the `@embeddings-cache` collection are automatically deleted when they expire (see the Client API sketch following this list).
+ 8. **Save**
+ Click _Save_ to store the task definition or _Cancel_.
+
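+The Document expiration feature mentioned in step 7 can also be enabled from the Client API.
+A minimal sketch using the `ConfigureExpirationOperation` maintenance operation; the scan frequency below is an arbitrary example value:
+
+```csharp
+using Raven.Client.Documents.Operations.Expiration;
+
+// Enable the Document expiration feature so that expired
+// `@embeddings-cache` documents are cleaned up automatically
+var expirationConfig = new ExpirationConfiguration
+{
+    Disabled = false,
+    // How often the server scans for expired documents (example value)
+    DeleteFrequencyInSec = 60
+};
+store.Maintenance.Send(new ConfigureExpirationOperation(expirationConfig));
+```
+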
+* **Define the embeddings source - using PATHS**:
+
+ 
+
+ 1. **Collection**
+ Enter or select the source document collection from the dropdown.
+ 2. **Embeddings source**
+ Select `Paths` to define the source content by document properties.
+ 3. **Path configuration**
+       Specify which document properties to extract text from, and how the extracted text should be chunked before embeddings are generated.
+
+ * **Source text path**
+ Enter the property name from the document that contains the text for embedding generation.
+ * **Chunking method**
+ Select the method for splitting the source text into chunks.
+ Learn more in [Chunking methods and tokens](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#chunking-methods-and-tokens).
+ * **Max tokens per chunk**
+ Enter the maximum number of tokens allowed per chunk (this depends on the service provider).
+ * **Overlap tokens**
+ Enter the number of tokens to repeat at the start of each chunk from the end of the previous one.
+ This helps preserve context between chunks by carrying over some tokens from one to the next.
+ Applies only to the _"Plain Text: Split Paragraphs"_ and _"Markdown: Split Paragraphs"_ chunking methods.
+
+ 4. **Add path configuration**
+       Click to add the specified path configuration to the list.
+ 5. **List of paths**
+ Displays the document properties you added for embedding generation.
+
+* **Define the embeddings source - using SCRIPT**:
+
+ 
+
+ 1. **Embeddings source**
+ Select `Script` to define the source content and chunking methods using a JavaScript script.
+ 2. **Script**
+ Refer to section [Chunking methods and tokens](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#chunking-methods-and-tokens) for available JavaScript methods.
+ 3. **Default chunking method**
+ The selected chunking method will be used by default when no method is specified in the script.
+       E.g., when the script contains: `Name: this.Name`.
+    4. **Default max tokens per chunk**
+ Enter the default value to use when no specific value is set for the chunking method in the script.
+ This is the maximum number of tokens allowed per chunk (depends on the service provider).
+ 5. **Default overlap tokens**
+ Enter the default value to use when no specific value is set for the chunking method in the script.
+ This is the number of tokens to repeat at the start of each chunk from the end of the previous one.
+ Applies only to the _"Plain Text: Split Paragraphs"_ and _"Markdown: Split Paragraphs"_ chunking methods.
+
+* **Define quantization and expiration -
+  for the embeddings generated from the source documents**:
+
+ 
+
+ 1. **Quantization**
+ Select the quantization method that RavenDB will apply to embeddings received from the service provider.
+ Available options:
+ * Single (no quantization)
+ * Int8
+ * Binary
+ 2. **Embeddings cache expiration**
+ Set the expiration period for documents stored in the `@embeddings-cache` collection.
+ These documents contain embeddings generated from the source documents, serving as a cache for these embeddings.
+ The default initial period is `90` days. This period may be extended when the source documents change.
+ Learn more in [The embeddings cache collection](../../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-cache-collection).
+ 3. **Regenerate embeddings**
+ This toggle is visible only when editing an existing task.
+ Toggle ON to regenerate embeddings for all documents in the collection, as specified by the _Paths_ or _Script_.
+
+* **Define chunking method & expiration -
+  for the embeddings generated from search terms in vector search queries**:
+
+ 
+
+ 1. **Querying**
+ This label indicates that this section configures parameters only for embeddings
+ generated by the task for **search terms** in vector search queries.
+ 2. **Chunking method**
+ Select the method for splitting the search term into chunks.
+ Learn more in [Chunking methods and tokens](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#chunking-methods-and-tokens).
+ 3. **Max tokens per chunk**
+ Enter the maximum number of tokens allowed per chunk (this depends on the service provider).
+ 4. **Embeddings cache expiration**
+ Set the expiration period for documents stored in the `@embeddings-cache` collection.
+ These documents contain embeddings generated from the search terms, serving as a cache for these embeddings.
+ The default period is `14` days. Learn more in [The embeddings cache collection](../../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-cache-collection).
+
+## Configuring an embeddings generation task - from the Client API
+
+
+
+#### Configure an embeddings generation task - define source using PATHS:
+
+
+```csharp
+// Define a connection string that will be used in the task definition:
+// ====================================================================
+
+var connectionString = new AiConnectionString
+{
+ // Connection string name & identifier
+ Name = "ConnectionStringToOpenAI",
+ Identifier = "id-for-open-ai-connection-string",
+
+ // OpenAI connection settings
+ OpenAiSettings = new OpenAiSettings(
+ apiKey: "your-api-key",
+ endpoint: "https://api.openai.com/v1",
+ model: "text-embedding-3-small")
+};
+
+// Deploy the connection string to the server:
+// ===========================================
+var putConnectionStringOp =
+ new PutConnectionStringOperation(connectionString);
+var putConnectionStringResult = store.Maintenance.Send(putConnectionStringOp);
+
+// Define the embeddings generation task:
+// ======================================
+var embeddingsTaskConfiguration = new EmbeddingsGenerationConfiguration
+{
+ // General info:
+ Name = "GetEmbeddingsFromOpenAI",
+ Identifier = "id-for-task-open-ai",
+ ConnectionStringName = "ConnectionStringToOpenAI",
+ Disabled = false,
+
+ // Embeddings source & chunking methods - using PATHS configuration:
+ Collection = "Categories",
+ EmbeddingsPathConfigurations = [
+ new EmbeddingPathConfiguration() {
+ Path = "Name",
+ ChunkingOptions = new()
+ {
+ ChunkingMethod = ChunkingMethod.PlainTextSplit,
+ MaxTokensPerChunk = 2048
+ }
+ },
+ new EmbeddingPathConfiguration()
+ {
+ Path = "Description",
+ ChunkingOptions = new()
+ {
+ ChunkingMethod = ChunkingMethod.PlainTextSplitParagraphs,
+ MaxTokensPerChunk = 2048,
+
+ // 'OverlapTokens' is only applicable when ChunkingMethod is
+ // 'PlainTextSplitParagraphs' or 'MarkDownSplitParagraphs'
+ OverlapTokens = 128
+ }
+ },
+ ],
+
+ // Quantization & expiration -
+ // for embeddings generated from source documents:
+ Quantization = VectorEmbeddingType.Single,
+ EmbeddingsCacheExpiration = TimeSpan.FromDays(90),
+
+ // Chunking method and expiration -
+ // for the embeddings generated from search term in vector search query:
+ ChunkingOptionsForQuerying = new()
+ {
+ ChunkingMethod = ChunkingMethod.PlainTextSplit,
+ MaxTokensPerChunk = 2048
+ },
+
+ EmbeddingsCacheForQueryingExpiration = TimeSpan.FromDays(14)
+};
+
+// Deploy the embeddings generation task to the server:
+// ====================================================
+var addEmbeddingsGenerationTaskOp =
+ new AddEmbeddingsGenerationOperation(embeddingsTaskConfiguration);
+var addAiIntegrationTaskResult = store.Maintenance.Send(addEmbeddingsGenerationTaskOp);
+```
+
+
+
+
+
+
+#### Configure an embeddings generation task - define source using SCRIPT:
+
+* To configure the source content using a script -
+ use the `EmbeddingsTransformation` object instead of the `EmbeddingsPathConfigurations` object.
+
+* The rest of the configuration properties are the same as in the example above.
+
+* Call `embeddings.generate(object)` within the script and apply the appropriate text-splitting methods to each field inside the object.
+ Each KEY in the object represents a document field, and the VALUE is a text-splitting function that processes the field's content before generating embeddings.
+
+* These methods ensure that the text chunks derived from document fields stay within the token limits required by the provider, preventing request rejection.
+ Learn more in [Chunking methods and tokens](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#chunking-methods-and-tokens).
+
+* For example:
+
+
+```csharp
+// Source collection:
+Collection = "Categories",
+
+// Use 'EmbeddingsTransformation':
+EmbeddingsTransformation = new EmbeddingsTransformation()
+{
+ // Define the script:
+ Script =
+ @"embeddings.generate({
+
+ // Process the document 'Name' field using method text.split().
+ // The text content will be split into chunks of up to 2048 tokens.
+ Name: text.split(this.Name, 2048),
+
+ // Process the document 'Description' field using method text.splitParagraphs().
+ // The text content will be split into chunks of up to 2048 tokens.
+ // 128 overlapping tokens will be repeated at the start of each chunk
+ // from the end of the previous one.
+ Description: text.splitParagraphs(this.Description, 2048, 128)
+ });"
+},
+```
+
+
+* If no chunking method is provided in the script, you can set default values as follows:
+
+
+```csharp
+Collection = "Categories",
+EmbeddingsTransformation = new EmbeddingsTransformation()
+{
+ Script =
+ @"embeddings.generate({
+
+ // No chunking method is specified here
+ Name: this.Name,
+ Description: this.Description
+ });",
+
+ // Specify the default chunking options to use in the script
+ ChunkingOptions = new ChunkingOptions()
+ {
+ ChunkingMethod = ChunkingMethod.PlainTextSplit,
+ MaxTokensPerChunk = 2048
+ }
+},
+```
+
+
+
+
+## Chunking methods and tokens
+
+**Tokens and processing limits**:
+
+* A token is the fundamental unit that Large Language Models (LLMs) use to process text.
+ AI service providers that generate embeddings from text enforce token limits for each processed text part.
+ If a text exceeds the provider’s limit, it may be truncated or rejected.
+
+**Using chunking methods**:
+
+* To handle lengthy text, you can define chunking strategies in the task definition and specify the desired number of tokens per chunk.
+ Chunking splits large input texts into smaller, manageable chunks, each containing no more than the specified maximum number of tokens.
+
+* The maximum number of tokens per chunk depends on the AI service provider and the specific model defined in the [connection string](../../../ai-integration/connection-strings/connection-strings-overview.mdx).
+ While RavenDB does not tokenize text, it estimates the number of tokens for chunking purposes by dividing the text length by 4.
+
+* The AI provider generates a single embedding for each chunk.
+ Depending on the maximum tokens per chunk setting, a single input text may result in multiple embeddings.
+
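+**Estimating chunk counts**:
+
+Because tokens are estimated as text length divided by 4, you can roughly predict how many chunks (and therefore how many embeddings) a given field will produce.
+The sketch below only illustrates this documented arithmetic; the `EstimateTokens` helper is hypothetical and not part of the RavenDB client:
+
+```csharp
+// Rough estimate following the documented heuristic: tokens ≈ characters / 4
+static int EstimateTokens(string text) => text.Length / 4;
+
+// A 16,000-character field is estimated at ~4,000 tokens.
+// With MaxTokensPerChunk = 2048 it would be split into ~2 chunks,
+// so the provider would return ~2 embeddings for this field.
+int estimatedTokens = EstimateTokens(new string('x', 16_000)); // 4000
+int chunkCount = (int)Math.Ceiling(estimatedTokens / 2048.0);  // 2
+```
+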
+**Available chunking methods**:
+
+RavenDB offers several chunking methods that can be applied per source type.
+These methods determine how input text is split before being sent to the provider.
+
+
+
+* `PlainText: Split`
+ Splits a plain text string into multiple chunks based on the specified maximum token count.
+ Estimates token lengths based on an average of 4 characters per token and applies a 0.75 ratio to determine chunk sizes.
+ Ensures that words are not split mid-way when forming chunks.
+
+ **Applies to**:
+ Fields containing plain text strings.
+ **Return Value**:
+ A list of text chunks (strings), where each chunk approximates the specified maximum token count without breaking words.
+
+* `PlainText: Split Lines`
+ Uses the Semantic Kernel _SplitPlainTextLines_ method.
+ Splits a plain text string into individual lines based on line breaks and whitespace while ensuring that each line does not exceed the specified maximum token limit.
+
+ **Applies to**:
+ Fields containing an array of plain text strings.
+ **Return value**:
+ A list of text segments (lines) derived from the original input, preserving line structure while ensuring token constraints.
+
+* `PlainText: Split Paragraphs`
+ Uses the Semantic Kernel _SplitPlainTextParagraphs_ method.
+ Combines consecutive lines to form paragraphs while ensuring each paragraph is as complete as possible without exceeding the specified token limit.
+ Optionally, set an overlap between chunks using the _overlapTokens_ parameter, which repeats the last _n_ tokens from one chunk at the start of the next.
+ This helps preserve context continuity across paragraph boundaries.
+
+ **Applies to**:
+ Fields containing an array of plain text strings.
+ **Return value**:
+ A list of paragraphs, where each paragraph consists of grouped lines that preserve readability without exceeding the token limit.
+
+* `Markdown: Split Lines`
+ Uses the Semantic Kernel _SplitMarkDownLines_ method.
+ Splits markdown content into individual lines at line breaks while ensuring that each line remains within the specified token limit.
+ Preserves markdown syntax, ensuring each line remains an independent, valid segment.
+
+ **Applies to**:
+ Fields containing strings with markdown content.
+ **Return value**:
+ A list of markdown lines, each respecting the token limit while maintaining the original formatting.
+
+* `Markdown: Split Paragraphs`
+ Uses the Semantic Kernel _SplitMarkdownParagraphs_ method.
+ Groups lines into coherent paragraphs at designated paragraph breaks while ensuring each paragraph remains within the specified token limit.
+ Markdown formatting is preserved.
+ Optionally, set an overlap between chunks using the _overlapTokens_ parameter, which repeats the last _n_ tokens from one chunk at the start of the next.
+ This helps preserve context continuity across paragraph boundaries.
+
+
+ **Applies to**:
+ Fields containing an array of strings with markdown content.
+ **Return value**:
+ A list of markdown paragraphs, each respecting the token limit and maintaining structural integrity.
+
+* `HTML: Strip`
+ Removes HTML tags from the content and splits the resulting plain text into chunks based on a specified token limit.
+
+ **Applies to**:
+ Fields containing strings with HTML.
+ **Return value**:
+ A list of text chunks derived from the stripped content, ensuring each chunk remains within the token limit.
+
+
+**Chunking method syntax for JavaScript scripts**:
+
+
+```javascript
+// Available text-splitting methods:
+// =================================
+
+// Plain text methods:
+text.split(text | [text], maxTokensPerLine);
+text.splitLines(text | [text], maxTokensPerLine);
+text.splitParagraphs(line | [line], maxTokensPerLine, overlapTokens?);
+
+// Markdown methods:
+markdown.splitLines(text | [text], maxTokensPerLine);
+markdown.splitParagraphs(line | [line], maxTokensPerLine, overlapTokens?);
+
+// HTML processing:
+html.strip(htmlText | [htmlText], maxTokensPerChunk);
+```
+
+
+| Parameter | Type | Description |
+|------------------------------------------|-----------|------------------------------------------------------------------ |
+| **text** | `string` | A plain text or markdown string to split. |
+| **line** | `string` | A single line or paragraph of text. |
+| **[text] / [line]** | `string[]`| An array of text or lines to split into chunks. |
+| **htmlText** | `string` | A string containing HTML content to process. |
+| **maxTokensPerChunk / maxTokensPerLine** | `number` | The maximum number of tokens allowed per chunk. Default is `512`. |
+| **overlapTokens** | `number` (optional) | The number of tokens to overlap between consecutive chunks. Helps preserve context continuity across chunks (e.g., between paragraphs). Default is `0`. |
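+
+For illustration, a short, hypothetical script snippet calling these methods; the document shape (`this.Description`, `this.AboutHtml`) and the token values are assumptions:
+
+```javascript
+// A minimal sketch: split the 'Description' field into paragraph chunks
+// of up to 256 tokens, with a 20-token overlap carried over between
+// consecutive chunks.
+var chunks = text.splitParagraphs(this.Description, 256, 20);
+
+// Strip HTML from an 'AboutHtml' field and chunk the remaining
+// plain text into chunks of up to 512 tokens.
+var plainChunks = html.strip(this.AboutHtml, 512);
+```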
+
+## Syntax
+
+#### The embeddings generation task configuration:
+
+
+```csharp
+// The 'EmbeddingsGenerationConfiguration' class inherits from 'EtlConfiguration<AiConnectionString>'
+// and provides the following specialized configurations for the embeddings generation task:
+// =========================================================================================
+
+public class EmbeddingsGenerationConfiguration : EtlConfiguration<AiConnectionString>
+{
+    public string Identifier { get; set; }
+    public string Collection { get; set; }
+    public List<EmbeddingPathConfiguration> EmbeddingsPathConfigurations { get; set; }
+    public EmbeddingsTransformation EmbeddingsTransformation { get; set; }
+    public VectorEmbeddingType Quantization { get; set; }
+    public ChunkingOptions ChunkingOptionsForQuerying { get; set; }
+    public TimeSpan EmbeddingsCacheExpiration { get; set; } = TimeSpan.FromDays(90);
+    public TimeSpan EmbeddingsCacheForQueryingExpiration { get; set; } = TimeSpan.FromDays(14);
+}
+```
+
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| **Identifier** | `string` | The identifier of the embeddings generation task. |
+| **Collection** | `string` | The name of the source collection from which documents are processed for embeddings generation. |
+| **EmbeddingsPathConfigurations** | `List<EmbeddingPathConfiguration>` | A list of properties inside documents that contain text to be embedded, along with their chunking settings. |
+| **EmbeddingsTransformation** | `EmbeddingsTransformation` | An object that contains a script defining the transformations and processing applied to the source text before generating embeddings. |
+| **Quantization** | `VectorEmbeddingType` | The quantization type for the generated embeddings. |
+| **ChunkingOptionsForQuerying** | `ChunkingOptions` | The chunking method and maximum token limit used when processing search terms in vector search queries. |
+| **EmbeddingsCacheExpiration** | `TimeSpan` | The expiration period for documents in the [Embedding cache collection](../../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-cache-collection) that contain embeddings generated from source documents. |
+| **EmbeddingsCacheForQueryingExpiration** | `TimeSpan` | The expiration period for documents in the embedding cache collection that contain embeddings generated from search terms in vector search queries. |
+
+
+```csharp
+public class EmbeddingPathConfiguration
+{
+ public string Path { get; set; }
+ public ChunkingOptions ChunkingOptions { get; set; }
+}
+
+public class ChunkingOptions
+{
+ public ChunkingMethod ChunkingMethod { get; set; } // Default is PlainTextSplit
+ public int MaxTokensPerChunk { get; set; } = 512;
+
+ // 'OverlapTokens' is only applicable when ChunkingMethod is
+ // 'PlainTextSplitParagraphs' or 'MarkDownSplitParagraphs'
+ public int OverlapTokens { get; set; } = 0;
+}
+
+public enum ChunkingMethod
+{
+ PlainTextSplit,
+ PlainTextSplitLines,
+ PlainTextSplitParagraphs,
+ MarkDownSplitLines,
+ MarkDownSplitParagraphs,
+ HtmlStrip
+}
+
+public class EmbeddingsTransformation
+{
+ public string Script { get; set; }
+    public ChunkingOptions ChunkingOptions { get; set; }
+}
+
+public enum VectorEmbeddingType
+{
+ Single,
+ Int8,
+ Binary,
+ Text
+}
+```
+
+
+#### Deploying the embeddings generation task:
+
+
+```csharp
+public AddEmbeddingsGenerationOperation(EmbeddingsGenerationConfiguration configuration);
+```
+
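+For illustration, a minimal deployment sketch is shown below. It sends the operation through `store.Maintenance.Send`; the task name, connection string name, collection, and property path are illustrative assumptions, and `Name`/`ConnectionStringName` are inherited from the base ETL configuration:
+
+```csharp
+// A sketch of defining and deploying an embeddings generation task
+var configuration = new EmbeddingsGenerationConfiguration
+{
+    Name = "GenerateCompanyEmbeddings",  // Task name (assumed)
+    Identifier = "companies-embeddings", // Task identifier (assumed)
+    ConnectionStringName = "my-ai-cs",   // An existing AI connection string (assumed)
+    Collection = "Companies",
+    EmbeddingsPathConfigurations = new List<EmbeddingPathConfiguration>
+    {
+        new EmbeddingPathConfiguration
+        {
+            Path = "Description", // Document property to embed (assumed)
+            ChunkingOptions = new ChunkingOptions
+            {
+                ChunkingMethod = ChunkingMethod.PlainTextSplit,
+                MaxTokensPerChunk = 512
+            }
+        }
+    }
+};
+
+// Deploy the ongoing task to the server
+store.Maintenance.Send(new AddEmbeddingsGenerationOperation(configuration));
+```
+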
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/content/_overview-csharp.mdx b/versioned_docs/version-7.1/ai-integration/generating-embeddings/content/_overview-csharp.mdx
new file mode 100644
index 0000000000..51906d1ad7
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/generating-embeddings/content/_overview-csharp.mdx
@@ -0,0 +1,185 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* RavenDB can serve as a vector database, see [Why choose RavenDB as your vector database](../../../ai-integration/vector-search/ravendb-as-vector-database.mdx#why-choose-ravendb-as-your-vector-database).
+
+* Vector search can be performed on:
+ * Raw text stored in your documents.
+ * Pre-made embeddings that you created yourself and stored using these [Data types](../../../ai-integration/vector-search/data-types-for-vector-search.mdx#numerical-data).
+ * Pre-made embeddings that are automatically generated from your document content by RavenDB's
+ **embeddings generation tasks** using external service providers, as explained below.
+* In this article:
+ * [Embeddings generation - overview](../../../ai-integration/generating-embeddings/overview.mdx#embeddings-generation---overview)
+ * [Embeddings generation - process flow](../../../ai-integration/generating-embeddings/overview.mdx#embeddings-generation---process-flow)
+ * [Supported providers](../../../ai-integration/generating-embeddings/overview.mdx#supported-providers)
+ * [Creating an embeddings generation task](../../../ai-integration/generating-embeddings/overview.mdx#creating-an-embeddings-generation-task)
+ * [Monitoring the tasks](../../../ai-integration/generating-embeddings/overview.mdx#monitoring-the-tasks)
+ * [Get embeddings generation task details](../../../ai-integration/generating-embeddings/overview.mdx#get-embeddings-generation-task-details)
+
+
+
+## Embeddings generation - overview
+
+
+
+#### Embeddings generation - process flow
+
+* **Define an Embeddings Generation Task**:
+ Specify a [connection string](../../../ai-integration/connection-strings/connection-strings-overview.mdx) that defines the AI provider and model for generating embeddings.
+ Define the source content - what parts of the documents will be used to create the embeddings.
+
+* **Source content is processed**:
+ 1. The task extracts the specified content from the documents.
+ 2. If a processing script is defined, it transforms the content before further processing.
+ 3. The text is split according to the defined chunking method; a separate embedding will be created for each chunk.
+ 4. Before contacting the provider, RavenDB checks the [embeddings cache](../../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-cache-collection)
+ to determine whether an embedding already exists for the given content from that provider.
+ 5. If a matching embedding is found, it is reused, avoiding unnecessary requests.
+ If no cached embedding is found, the transformed and chunked content is sent to the configured AI provider.
+
+* **Embeddings are generated by the AI provider**:
+ The provider generates embeddings and sends them back to RavenDB.
+ If quantization was defined in the task, RavenDB applies it to the embeddings before storing them.
+
+* **Embeddings are stored in your database**:
+ * Each embedding is stored as an attachment in a [dedicated collection](../../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-collection).
+ * RavenDB maintains an [embeddings cache](../../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-cache-collection),
+ allowing reuse of embeddings for the same source content and reducing provider calls.
+ Cached embeddings expire after a configurable duration.
+
+* **Perform vector search:**
+ Once the embeddings are stored, you can perform vector searches on your document content by:
+ * Running a [dynamic query](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-pre-made-embeddings-generated-by-tasks), which automatically creates an auto-index for the search.
+ * Defining a [static index](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-pre-made-text-embeddings) to store and query embeddings efficiently.
+
+ The query search term is split into chunks, and each chunk is looked up in the cache.
+ If not found, RavenDB requests an embedding from the provider and caches it.
+ The embedding (cached or newly created) is then used to compare against stored vectors.
+
+* **Continuous processing**:
+ * Embeddings generation tasks are [Ongoing Tasks](../../../studio/database/tasks/ongoing-tasks/general-info.mdx) that process documents as they change.
+ Before contacting the provider after a document change, the task first checks the cache to see if a matching embedding already exists, avoiding unnecessary requests.
+ * The requests to generate embeddings from the source text are sent to the provider in batches.
+ The batch size is configurable, see the [Ai.Embeddings.MaxBatchSize](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxbatchsize) configuration key.
+ * A failed embeddings generation task will retry after the duration set in the
+ [Ai.Embeddings.MaxFallbackTimeInSec](../../../server/configuration/ai-integration-configuration.mdx#aiembeddingsmaxfallbacktimeinsec) configuration key.
+
+
+
+
+
+#### Supported providers
+
+* The following service providers are supported for auto-generating embeddings using tasks:
+
+ * [OpenAI & OpenAI-compatible providers](../../../ai-integration/connection-strings/open-ai.mdx)
+ * [Azure Open AI](../../../ai-integration/connection-strings/azure-open-ai.mdx)
+ * [Google AI](../../../ai-integration/connection-strings/google-ai.mdx)
+ * [Vertex AI](../../../ai-integration/connection-strings/vertex-ai.mdx)
+ * [Hugging Face](../../../ai-integration/connection-strings/hugging-face.mdx)
+ * [Ollama](../../../ai-integration/connection-strings/ollama.mdx)
+ * [Mistral AI](../../../ai-integration/connection-strings/mistral-ai.mdx)
+ * [bge-micro-v2](../../../ai-integration/connection-strings/embedded.mdx) (a local embedded model within RavenDB)
+
+
+
+
+
+
+
+## Creating an embeddings generation task
+
+* An embeddings generation task can be created from:
+ * The **AI Tasks view in the Studio**, where you can create, edit, and delete tasks. Learn more in [AI Tasks - list view](../../../ai-integration/ai-tasks-list-view.mdx).
+  * The **Client API** - see [Configuring an embeddings generation task - from the Client API](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configuring-an-embeddings-generation-task---from-the-client-api).
+* From the Studio:
+
+ 
+
+ 1. Go to the **AI Hub** menu.
+ 2. Open the **AI Tasks** view.
+ 3. Click **Add AI Task** to add a new task.
+
+ 
+
+* See the complete details of the task configuration in the [Embeddings generation task](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx) article.
+
+## Monitoring the tasks
+
+* The status and state of each embeddings generation task are visible in the [AI Tasks - list view](../../../ai-integration/ai-tasks-list-view.mdx).
+
+* Task performance and activity over time can be analyzed in the _AI Tasks Stats_ view,
+ where you can track processing duration, batch sizes, and overall progress.
+ Learn more about the functionality of the stats view in the [Ongoing Tasks Stats](../../../studio/database/stats/ongoing-tasks-stats/overview.mdx) article.
+
+* The number of embeddings generation tasks across all databases can also be monitored using [SNMP](../../../server/administration/snmp/snmp-overview.mdx).
+ The following SNMP OIDs provide relevant metrics:
+ * [5.1.11.25](../../../server/administration/snmp/snmp-overview.mdx#511125) – Total number of enabled embeddings generation tasks.
+ * [5.1.11.26](../../../server/administration/snmp/snmp-overview.mdx#511126) – Total number of active embeddings generation tasks.
+
+## Get embeddings generation task details
+
+* Besides viewing the list of tasks in the [AI Tasks - list view](../../../ai-integration/ai-tasks-list-view.mdx) in the Studio,
+ you can also retrieve embeddings generation task details programmatically.
+
+* This is useful when issuing a vector search query that references an embeddings generation task,
+ where it's important to verify that the task exists beforehand. For example:
+ * when [Querying pre-made embeddings generated by tasks](../../../ai-integration/vector-search/vector-search-using-dynamic-query#querying-pre-made-embeddings-generated-by-tasks)
+ * or when [Indexing numerical data and querying using text input](../../../ai-integration/vector-search/vector-search-using-static-index#indexing-numerical-data-and-querying-using-text-input)
+
+* There are two ways to check if an embeddings generation task exists:
+ * Using `GetOngoingTaskInfoOperation`.
+ * Accessing the full list of embeddings generation tasks from the database record.
+
+
+
+
+```csharp
+// Define the get task operation, pass the task NAME
+var getOngoingTaskOp =
+ new GetOngoingTaskInfoOperation("theEmbeddingsGenerationTaskName", OngoingTaskType.EmbeddingsGeneration);
+
+// Execute the operation by passing it to Maintenance.Send
+// Explicitly cast the result to the "EmbeddingsGeneration" type
+var task = (EmbeddingsGeneration)store.Maintenance.Send(getOngoingTaskOp);
+
+// Verify the task exists
+if (task != null)
+{
+ // Access any of the task details
+ var taskStatus = task.TaskState;
+
+ // Access the task identifier
+ var taskIdentifier = task.Configuration.Identifier;
+}
+```
+
+
+```csharp
+// Define the get database record operation, pass your database name
+var getDatabaseRecordOp = new GetDatabaseRecordOperation("yourDatabaseName");
+
+// Execute the operation by passing it to Maintenance.Send
+var dbRecord = store.Maintenance.Server.Send(getDatabaseRecordOp);
+
+// Access the list of embeddings generation tasks
+var tasks = dbRecord.EmbeddingsGenerations;
+
+if (tasks.Count > 0)
+{
+ // Access the first task
+ var task = tasks[0];
+
+ // Access any of the task details
+ var isTaskDisabled = task.Disabled;
+
+ // Access the task identifier
+ var taskIdentifier = task.Identifier;
+}
+```
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/embedding-collections.mdx b/versioned_docs/version-7.1/ai-integration/generating-embeddings/embedding-collections.mdx
new file mode 100644
index 0000000000..cb25679e5a
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/generating-embeddings/embedding-collections.mdx
@@ -0,0 +1,214 @@
+---
+title: "The Embedding Collections"
+hide_table_of_contents: true
+sidebar_label: The Embedding Collections
+sidebar_position: 3
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# The Embedding Collections
+
+
+* The embeddings generated by the providers are stored as **attachments** in your database.
+ Each attachment contains a single embedding.
+
+* The server creates the following dedicated collections,
+ which contain documents that reference the embedding attachments:
+ * **Embeddings Collections**
+ * **Embeddings Cache Collection**
+
+* This article describes these custom-designed collections.
+ It is recommended to first refer to this [Overview](../../ai-integration/generating-embeddings/overview.mdx#embeddings-generation---overview)
+ to understand the embeddings generation process flow.
+* In this article:
+ * [The embeddings collections](../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-collections)
+ * [The embeddings cache collection](../../ai-integration/generating-embeddings/embedding-collections.mdx#the-embeddings-cache-collection)
+
+
+## The embeddings collections
+
+* RavenDB creates a separate embeddings collection for each source collection from which embeddings are generated.
+  The naming format for these collections is: `@embeddings/<name-of-source-collection>`.
+
+* Each document in the embeddings collection references ALL embeddings generated from
+ the content of the corresponding document in the source collection by any defined embeddings generation task.
+
+* The document structure in the embeddings collection is:
+
+
+
+{`\{
+ "identifier-of-task-1": \{
+    "@quantization": "<quantization-type>",
+ "Property1": [
+ "Hash of the embedding vector generated for 1st text chunk of Property1's content",
+ "Hash of the embedding vector generated for 2nd text chunk of Property1's content",
+ "Hash of the embedding vector generated for 3rd text chunk of Property1's content",
+ "..."
+ ],
+ "Property2": [
+ "Hash of the embedding vector generated for 1st text chunk of Property2's content",
+ "..."
+ ]
+ \},
+ "identifier-of-task-2": \{
+ "Property3": [
+ "Hash of the embedding vector generated for 1st text chunk of Property3's content",
+ "..."
+ ]
+ \},
+ "Other-tasks...": \{
+ ...
+ \},
+ "@metadata": \{
+    "@collection": "@embeddings/<name-of-source-collection>",
+ "@flags": "HasAttachments"
+ \}
+\}
+`}
+
+
+* For example:
+ In this [task definition](../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configuring-an-embeddings-generation-task---from-the-studio),
+ an embeddings generation task is defined on the `Categories` collection.
+ This creates the `@embeddings/Categories` collection, where a document will look as follows:
+
+ 
+
+ 1. **Collection name**
+ The unique name of the embeddings collection: `@embeddings/Categories`.
+ 2. **Document ID**
+     Each document ID in this collection follows the format: `embeddings/<ID-of-source-document>`
+ 3. **Task identifier**
+ The identifier of the task that generated the embeddings for the listed properties.
+ 4. **Quantization type**
+ The quantization method applied by the task when generating the embeddings.
+ 5. **Source properties & their hashes**:
+ This section contains properties from the source document whose content was converted into embeddings.
+ Each property holds an array of Base64 hashes.
+ Each hash is derived from the content of an embedding vector generated for a text chunk from the property's content:
+     `<property-name>: [`
+     `<hash-of-embedding-for-chunk-1>,`
+     `<hash-of-embedding-for-chunk-2>,`
+     `...`
+     `]`
+ 6. **Attachment flag**
+ Indicates that the document includes attachments, which store the embeddings.
+ The next image shows the embedding attachments in the document's properties pane.
+
+ 
+
+ * Each attachment contains a **single embedding**.
+
+ * The **attachment name** is the Base64 hash derived from the content of the embedding vector stored in the attachment:
+     `<hash-of-embedding-vector>`
+
+
+
+## The embeddings cache collection
+
+
+
+#### Cache contents
+* In addition to creating embeddings collections for each source collection,
+ RavenDB creates and maintains a single **embeddings cache collection** named: `@embeddings-cache`.
+
+* This cache collection contains embeddings generated by all providers,
+ both from source documents and from search terms used in vector search queries.
+
+* Each document in the `@embeddings-cache` collection references a **single attachment** that contains a single embedding vector.
+ **The document ID includes**:
+ * The [connection string identifier](../../ai-integration/connection-strings/connection-strings-overview.mdx#creating-an-ai-connection-string),
+ which specifies the provider and model that generated the embedding.
+ * A Base64 hash generated from a text chunk value - either from a source document property or from a search term.
+ * If the embedding was quantized by the task, the document ID also includes the quantization type.
+
+
+
+
+#### Cache lookup
+* Before making a request to a text embedding provider,
+ RavenDB first checks the `@embeddings-cache` collection to determine whether an embedding for the given input already exists from the same provider.
+
+* This applies both when generating embeddings for source document content and when performing a vector search that requires an embedding for the search term.
+
+* To find a matching embedding, RavenDB:
+ 1. **Generates a hash** from the chunked text content that requires embedding.
+ 2. **Identifies the provider** the user is working with, based on the specified connection string.
+ 3. **Compares these values** (the connection string identifier and the hash) with those stored in the cache collection.
+ (Each document in `@embeddings-cache` has an ID that includes these two components).
+ 4. If a document with a matching ID exists in the cache,
+ RavenDB **retrieves the corresponding embedding** instead of generating a new one.
+
+
+
+
+#### Cache performance benefits
+* **Reduced latency**:
+ Reusing cached embeddings eliminates the need for additional provider requests, improving response time.
+
+* **Lower provider costs**:
+ Since embedding providers often charge per request, caching prevents redundant calls and reduces expenses.
+
+* **Optimized vector search**:
+ If a cached embedding exists for the search term in the query, the search runs faster by skipping unnecessary processing.
+
+
+
+
+#### Expiration policy
+* **The expiration date**:
+ Each document in this cache collection is created with an expiration date, which is set according to the expiration period defined in the embeddings generation task.
+ Once the expiration date is reached, the document is automatically deleted (provided that [document expiration](../../studio/database/settings/document-expiration.mdx) is enabled).
+
+* **Extending the expiration period**:
+ * When a source document (from which embeddings were generated) is modified - even if the change is not to a property used for embeddings -
+ RavenDB checks the expiration of the matching document in the cache collection.
+ If the remaining time is less than half of the original duration, RavenDB extends the expiration by the period defined in the task.
+ * When you make a vector search query and an embedding generated from a chunk of the search term already exists in the cache,
+ RavenDB also extends the expiration of the matching document by the period defined in the query settings of the embeddings generation task.
+
+
+* **The @embeddings-cache collection**:
+
+ 
+
+ 1. **Collection name**
+ The name of the embeddings cache collection: `@embeddings-cache`.
+
+ 2. **Connection string identifier**
+ The document ID includes the connection string identifier, which specifies the provider that generated the embedding.
+
+ 3. **Hash**
+ The document ID includes a Base64 hash created from a text chunk -
+ either from a source document property or from a search term in a vector search query.
+* **A document in the @embeddings-cache collection**:
+
+ 
+
+ 1. **Document ID**
+ The document ID follows this format:
+     `embeddings-cache/<connection-string-identifier>/<content-hash>`
+
+ If the embedding was [quantized](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#what-is-quantization) by the task
+ using a type other than _Single_ (e.g., _Int8_ or _Binary_),
+ the ID format includes the quantization type:
+     `embeddings-cache/<connection-string-identifier>/<content-hash>/<quantization-type>`
+
+ 2. **Expiration time**
+ The document is removed when the expiration time is reached.
+* **The embedding attachment**:
+
+ 
+
+ * The name of the attachment is the hash string:
+     `<content-hash>`
+
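+As noted under the expiration policy above, expired cache documents are only removed when document expiration is enabled for the database. A minimal sketch of enabling it from the Client API (the 60-second scan frequency is an arbitrary illustrative value):
+
+```csharp
+using Raven.Client.Documents.Operations.Expiration;
+
+// Enable the expiration feature so that expired cache documents are deleted
+store.Maintenance.Send(new ConfigureExpirationOperation(new ExpirationConfiguration
+{
+    Disabled = false,          // Turn expiration on
+    DeleteFrequencyInSec = 60  // How often the server scans for expired documents
+}));
+```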
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/embeddings-generation-task.mdx b/versioned_docs/version-7.1/ai-integration/generating-embeddings/embeddings-generation-task.mdx
new file mode 100644
index 0000000000..6c8d44a7d4
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/generating-embeddings/embeddings-generation-task.mdx
@@ -0,0 +1,40 @@
+---
+title: "The Embeddings Generation Task"
+hide_table_of_contents: true
+sidebar_label: The Embeddings Generation Task
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import EmbeddingsGenerationTaskCsharp from './content/_embeddings-generation-task-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/embeddings-generation_start.mdx b/versioned_docs/version-7.1/ai-integration/generating-embeddings/embeddings-generation_start.mdx
new file mode 100644
index 0000000000..43462027a4
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/generating-embeddings/embeddings-generation_start.mdx
@@ -0,0 +1,60 @@
+---
+title: "Generating embeddings: Start"
+hide_table_of_contents: true
+sidebar_label: Start
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+
+import CardWithImage from "@site/src/components/Common/CardWithImage";
+import CardWithImageHorizontal from "@site/src/components/Common/CardWithImageHorizontal";
+import ColGrid from "@site/src/components/ColGrid";
+import embedGenStartApiImage from "./assets/embeddings-generation_start_api-image.png";
+import embedGenStartStudioImage from "./assets/embeddings-generation_start_studio-image.png";
+import aiSearchArticleImage from "./assets/ai-search-article-cover.webp";
+
+import ayendeBlogImage from "@site/static/img/from-ayende-com.webp";
+import webinarThumbnailPlaceholder from "@site/static/img/webinar.webp";
+import discordLargeThumbnailPlaceholder from "@site/static/img/discord-lg.webp";
+
+# Generating embeddings
+
+### Create embeddings to enable AI-powered similarity search.
+[Embeddings](https://en.wikipedia.org/wiki/Embedding_(machine_learning)) are numeric vectors that you can create for data (like a text or an image) to capture meanings, contexts, or relationships related to the data. You can then search the data by running intelligent queries over its embeddings using [vector search](../../ai-integration/vector-search/vector-search_start) to find content by similarity rather than exact match.
+- RavenDB allows you to create embeddings using native [ongoing embeddings-generation tasks](../../ai-integration/generating-embeddings/embeddings-generation-task) that systematically process document collections and convert document fields (like texts or arrays) into embeddings. To create the embeddings, the tasks can use either an external AI model (such as OpenAI) or RavenDB's default embedding model.
+- You can also create embeddings using external embeddings providers and store them in your database (e.g., to handle other content types such as images).
+- You can also skip pre-generating embeddings and let vector search operations generate them on the fly while searching.
+- Embeddings can be used by other RavenDB AI features. E.g., [AI agents](../../ai-integration/ai-agents/ai-agents_start) can use vector search to retrieve relevant data requested by the LLM.
+
+### Use cases
+Embeddings generation tasks can be used to prepare your data for AI-powered search, analysis, and other uses, for example:
+* **Enterprise knowledge bases**
+ Generate embeddings for thousands of documents, policies, and procedures to enable instant semantic search
+* **Legal document libraries**
+ Process case law, contracts, and regulations to build searchable legal repositories
+* **Product catalogs**
+ Convert product descriptions, specifications, and reviews into embeddings for enhanced e-commerce search
+* **Content management systems**
+ Transform blog posts, articles, and marketing materials into searchable vector representations
+
+### Technical documentation
+Learn about generating, storing, and using embeddings in RavenDB.
+
+
+
+
+
+
+#### Learn more: In-depth embeddings generation articles
+
+
+
+
+
+### Related live sessions & videos
+Learn more about enhancing your applications using vector search operations.
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/generating-embeddings/overview.mdx b/versioned_docs/version-7.1/ai-integration/generating-embeddings/overview.mdx
new file mode 100644
index 0000000000..5dd8ef569f
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/generating-embeddings/overview.mdx
@@ -0,0 +1,40 @@
+---
+title: "Generating Embeddings - Overview"
+hide_table_of_contents: true
+sidebar_label: "Overview"
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import OverviewCsharp from './content/_overview-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/_category_.json b/versioned_docs/version-7.1/ai-integration/vector-search/_category_.json
new file mode 100644
index 0000000000..d0ca16e140
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 0,
+ "label": "Vector Search"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/add-vector-field-1.png b/versioned_docs/version-7.1/ai-integration/vector-search/assets/add-vector-field-1.png
new file mode 100644
index 0000000000..ebccdbb278
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/add-vector-field-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/add-vector-field-2.png b/versioned_docs/version-7.1/ai-integration/vector-search/assets/add-vector-field-2.png
new file mode 100644
index 0000000000..4d99d382a4
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/add-vector-field-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/ai-image-search-with-ravendb.webp b/versioned_docs/version-7.1/ai-integration/vector-search/assets/ai-image-search-with-ravendb.webp
new file mode 100644
index 0000000000..f4ed44169a
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/ai-image-search-with-ravendb.webp differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/json-document.png b/versioned_docs/version-7.1/ai-integration/vector-search/assets/json-document.png
new file mode 100644
index 0000000000..8634b1b803
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/json-document.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-1.snagx b/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-1.snagx
new file mode 100644
index 0000000000..b11fc7784c
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-1.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-2.snagx b/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-2.snagx
new file mode 100644
index 0000000000..cc88043a28
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-2.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-3.snagx b/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-3.snagx
new file mode 100644
index 0000000000..b3cf8930fa
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/snagit/view-auto-index-entries-3.snagx differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/vector-search-1.png b/versioned_docs/version-7.1/ai-integration/vector-search/assets/vector-search-1.png
new file mode 100644
index 0000000000..85ffaf211d
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/vector-search-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/vector-search-2.png b/versioned_docs/version-7.1/ai-integration/vector-search/assets/vector-search-2.png
new file mode 100644
index 0000000000..fa114f14a5
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/vector-search-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-1.png b/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-1.png
new file mode 100644
index 0000000000..383867ae2a
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-1.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-2.png b/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-2.png
new file mode 100644
index 0000000000..25c880ebbd
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-2.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-3.png b/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-3.png
new file mode 100644
index 0000000000..bda8528061
Binary files /dev/null and b/versioned_docs/version-7.1/ai-integration/vector-search/assets/view-auto-index-entries-3.png differ
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/content/_data-types-for-vector-search-csharp.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/content/_data-types-for-vector-search-csharp.mdx
new file mode 100644
index 0000000000..6334e4665e
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/content/_data-types-for-vector-search-csharp.mdx
@@ -0,0 +1,128 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Data for vector search can be stored in **raw** or **pre-quantized** formats using several data types,
+ as outlined below.
+
+* Text and numerical data that is not pre-quantized can be further quantized when the embeddings are generated.
+ Learn more in [Quantization options](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options).
+
+* In this article:
+ * [Supported data types for vector search](../../../ai-integration/vector-search/data-types-for-vector-search.mdx#supported-data-types-for-vector-search)
+ * [Textual data](../../../ai-integration/vector-search/data-types-for-vector-search.mdx#textual-data)
+ * [Numerical data](../../../ai-integration/vector-search/data-types-for-vector-search.mdx#numerical-data)
+ * [RavenVector](../../../ai-integration/vector-search/data-types-for-vector-search.mdx#ravenvector)
+
+
+
+## Supported data types for vector search
+
+### Textual data
+
+
+
+`string` - A single text entry.
+`string[]` - An array of text entries.
+
+
+
+### Numerical data
+
+* You can store **pre-generated** embedding vectors in your documents,
+ typically created by machine-learning models from text, images, or other sources.
+
+* When storing numerical embeddings in a document field:
+ * Ensure that all vectors within this field across all documents in the collection are generated by the **same model** and model version and have the **same dimensions**.
+ * Consistency in both dimensionality and model source is crucial for meaningful comparisons in the vector space.
+
+* In addition to the native types described below, we highly recommend using [RavenVector](../../../ai-integration/vector-search/data-types-for-vector-search.mdx#ravenvector)
+ for efficient storage and fast queries when working with numerical embeddings.
+
+
+
+**Raw embedding data**:
+Use when precision is critical.
+
+`float[]` - A single vector of numerical values representing raw embedding data.
+`float[][]` - An array of vectors, where each entry is a separate embedding vector.
+
+
+
+
+
+**Pre-quantized data**:
+Use when you prioritize storage efficiency and query speed.
+
+`byte[] / sbyte[]` - A single pre-quantized embedding vector in the _Int8_ or _Binary_ quantization format.
+`byte[][] / sbyte[][]` - An array of pre-quantized embedding vectors.
+
+When storing data in these formats in your documents, you should use [RavenDB’s vector quantizer methods](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#section-1).
+
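+For illustration, here is a hedged sketch of pre-quantizing a raw vector before storing it in a document. The `VectorQuantizer.ToInt8` and `VectorQuantizer.ToInt1` helper names are assumptions standing in for the quantizer methods linked above:
+
+```csharp
+float[] rawEmbedding = { 0.1f, 0.2f, 0.3f, 0.4f };
+
+// Int8 quantization - 8-bit signed integers (assumed helper name)
+sbyte[] int8Embedding = VectorQuantizer.ToInt8(rawEmbedding);
+
+// Binary quantization - 1 bit per dimension, packed into bytes (assumed helper name)
+byte[] binaryEmbedding = VectorQuantizer.ToInt1(rawEmbedding);
+```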
+
+
+
+
+**Base64-encoded data**:
+Use when embedding data needs to be represented as a compact and easily serializable string format.
+
+`string` - A single vector encoded as a Base64 string.
+`string[]` - An array of Base64-encoded vectors.
+
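+A short sketch of producing such a string - reinterpreting a raw `float[]` vector as bytes and Base64-encoding it (the same pattern appears in the attachment-indexing examples):
+
+```csharp
+using System;
+using System.Runtime.InteropServices;
+
+float[] vector = { 0.1f, 0.2f, 0.3f, 0.4f };
+
+// Reinterpret the float values as raw bytes, then Base64-encode them
+string base64Vector = Convert.ToBase64String(MemoryMarshal.Cast<float, byte>(vector));
+```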
+
+
+
+
+**Using lists**:
+
+While arrays (`float[]`) are the most direct representation of numerical embeddings,
+you can also use lists (for example, `List<float>` or `List<byte>`) for dynamic sizing in your application code.
+
+
+
+## RavenVector
+
+RavenVector is RavenDB's dedicated data type for storing and querying **numerical embeddings**.
+It is highly optimized to minimize storage space and improve the speed of reading arrays from disk,
+making it ideal for vector search.
+
+For example, you can define:
+
+
+
+{`RavenVector<float>; // A single vector of floating-point values.
+List<RavenVector<float>>; // A collection of float-based vectors.
+RavenVector<sbyte>; // A single pre-quantized vector in Int8 format (8-bit signed integer).
+List<RavenVector<sbyte>>; // A collection of sbyte-based vectors.
+RavenVector<byte>; // A single pre-quantized vector in Binary format (8-bit unsigned integer).
+List<RavenVector<byte>>; // A collection of byte-based vectors.
+`}
+
+
+
+When a class property is stored as a `RavenVector`, the vector's content will appear under the `@vector` field in the JSON document stored in the database.
+For example:
+
+
+
+
+{`public class SampleClass
+{
+ public string Id { get; set; }
+ public string Title { get; set; }
+
+ // Storing data in a RavenVector property for optimized storage and performance
+    public RavenVector<float> EmbeddingRavenVector { get; set; }
+
+ // Storing data in a regular array property
+ public float[] EmbeddingVector { get; set; }
+}
+`}
+
+
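+A quick usage sketch follows; the stored values are illustrative (in practice the vector would come from an embedding model), and it assumes `RavenVector<float>` accepts the raw array in its constructor:
+
+```csharp
+using (var session = store.OpenSession())
+{
+    session.Store(new SampleClass
+    {
+        Title = "Sample document with embeddings",
+        // Serialized under the '@vector' field in the stored JSON document
+        EmbeddingRavenVector = new RavenVector<float>(new float[] { 0.1f, 0.2f, 0.3f }),
+        // Serialized as a plain JSON array
+        EmbeddingVector = new float[] { 0.1f, 0.2f, 0.3f }
+    });
+    session.SaveChanges();
+}
+```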
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/content/_indexing-attachments-for-vector-search-csharp.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/content/_indexing-attachments-for-vector-search-csharp.mdx
new file mode 100644
index 0000000000..86b063916a
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/content/_indexing-attachments-for-vector-search-csharp.mdx
@@ -0,0 +1,960 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to index attachments using a **static-index** to enable vector search on their content.
+ Note: Vector search on attachment content is not available when making a [dynamic query](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx).
+
+* **Prior to this article**, refer to the [Vector search using a static index](../../../ai-integration/vector-search/vector-search-using-static-index.mdx) article for general knowledge about
+ indexing a vector field.
+
+* In this article:
+ * [Overview](../../../ai-integration/vector-search/indexing-attachments-for-vector-search.mdx#overview)
+ * [Indexing TEXT attachments](../../../ai-integration/vector-search/indexing-attachments-for-vector-search.mdx#indexing-text-attachments)
+ * [Indexing NUMERICAL attachments](../../../ai-integration/vector-search/indexing-attachments-for-vector-search.mdx#indexing-numerical-attachments)
+ * [LINQ index](../../../ai-integration/vector-search/indexing-attachments-for-vector-search.mdx#linq-index)
+ * [JS index](../../../ai-integration/vector-search/indexing-attachments-for-vector-search.mdx#js-index)
+ * [Indexing ALL attachments](../../../ai-integration/vector-search/indexing-attachments-for-vector-search.mdx#indexing-all-attachments)
+
+
+
+## Overview
+
+
+
+#### Attachments in RavenDB
+
+* Attachments in RavenDB allow you to associate binary files with your JSON documents.
+ You can use attachments to store images, PDFs, videos, text files, or any other format.
+
+* Attachments are stored separately from documents, reducing document size and avoiding unnecessary duplication.
+ They are stored as **binary data**, regardless of content type.
+
+* Attachments are handled as streams, allowing efficient upload and retrieval.
+ Learn more in: [What are attachments](../../../document-extensions/attachments/what-are-attachments.mdx).
+
+
+
+
+
+#### Indexing attachment content for vector search
+
+You can index attachment content in a vector field within a static-index,
+enabling vector search on text or numerical data that is stored in the attachments:
+
+* **Attachments with TEXT**:
+ * During indexing, RavenDB processes the text into a single embedding per attachment using the built-in
+ [bge-micro-v2](https://huggingface.co/TaylorAI/bge-micro-v2) model.
+
+* **Attachments with NUMERICAL data**:
+ * While attachments can store any file type, RavenDB does Not generate embeddings from images, videos, or other non-textual content.
+ Each attachment must contain a **single** precomputed embedding vector, generated externally.
+  * RavenDB indexes the embedding vector from the attachment and can apply [quantization](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options)
+    (e.g., index it in _Int8_ format) if this is configured.
+ * All embeddings indexed within the same vector-field in the static-index must be vectors of the **same dimension** to ensure consistency in indexing and search.
+ They must also be created using the **same model**.
+
+
+
+## Indexing TEXT attachments
+
+* The following index defines a **vector field** named `VectorFromAttachment`.
+
+* It indexes embeddings generated from the text content of the `description.txt` attachment.
+ This applies to all _Company_ documents that contain an attachment with that name.
+
+
+
+
+{`public class Companies_ByVector_FromTextAttachment :
+    AbstractIndexCreationTask<Company, IndexEntry>
+{
+ public class IndexEntry()
+ {
+ // This index-field will hold embeddings
+ // generated from the TEXT in the attachments.
+ public object VectorFromAttachment { get; set; }
+ }
+
+ public Companies_ByVector_FromTextAttachment()
+ {
+ Map = companies => from company in companies
+
+ // Load the attachment from the document (ensure it is not null)
+ let attachment = LoadAttachment(company, "description.txt")
+ where attachment != null
+
+ select new IndexEntry()
+ {
+ // Index the text content from the attachment in the vector field
+ VectorFromAttachment =
+ CreateVector(attachment.GetContentAsString(Encoding.UTF8))
+ };
+
+ // Configure the vector field:
+ VectorIndexes.Add(x => x.VectorFromAttachment,
+ new VectorOptions()
+ {
+ // Specify 'Text' as the source format
+ SourceEmbeddingType = VectorEmbeddingType.Text,
+ // Specify the desired destination format within the index
+ DestinationEmbeddingType = VectorEmbeddingType.Single
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`public class Companies_ByVector_FromTextAttachment_JS :
+ AbstractJavaScriptIndexCreationTask
+{
+ public Companies_ByVector_FromTextAttachment_JS()
+ {
+        Maps = new HashSet<string>
+ {
+ @"map('Companies', function (company) {
+
+ var attachment = loadAttachment(company, 'description.txt');
+ if (!attachment) return null;
+
+ return {
+ VectorFromAttachment: createVector(attachment.getContentAsString('utf8'))
+ };
+ })"
+ };
+
+        Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromAttachment",
+ new IndexFieldOptions()
+ {
+ Vector = new()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Text,
+ DestinationEmbeddingType = VectorEmbeddingType.Single
+ }
+ }
+ }
+ };
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`var indexDefinition = new IndexDefinition
+{
+ Name = "Companies/ByVector/FromTextAttachment",
+
+    Maps = new HashSet<string>
+ {
+ @"from company in docs.Companies
+
+ let attachment = LoadAttachment(company, ""description.txt"")
+ where attachment != null
+
+ select new
+ {
+ VectorFromAttachment =
+ CreateVector(attachment.GetContentAsString(Encoding.UTF8))
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromAttachment",
+ new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Text,
+ DestinationEmbeddingType = VectorEmbeddingType.Single
+ }
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+
+{`// Prepare text as \`byte[]\` to be stored as attachments:
+// =====================================================
+var byteArray1 = Encoding.UTF8.GetBytes(
+ "Supplies soft drinks, fruit juices, and flavored syrups to restaurants and retailers.");
+var byteArray2 = Encoding.UTF8.GetBytes(
+ "Supplies fine dining restaurants with premium meats, cheeses, and wines across France.");
+var byteArray3 = Encoding.UTF8.GetBytes(
+ "An American grocery chain known for its fresh produce, organic foods, and local meats.");
+var byteArray4 = Encoding.UTF8.GetBytes(
+ "An Asian grocery store specializing in ingredients for Japanese and Thai cuisine.");
+var byteArray5 = Encoding.UTF8.GetBytes(
+ "A rural general store offering homemade jams, fresh-baked bread, and locally crafted gifts.");
+
+using (var session = store.OpenSession())
+{
+ // Load existing Company documents from RavenDB's sample data:
+ // ===========================================================
+    var company1 = session.Load<Company>("companies/11-A");
+    var company2 = session.Load<Company>("companies/26-A");
+    var company3 = session.Load<Company>("companies/32-A");
+    var company4 = session.Load<Company>("companies/41-A");
+    var company5 = session.Load<Company>("companies/43-A");
+
+ // Store the attachments in the documents (using MemoryStream):
+ // ============================================================
+ session.Advanced.Attachments.Store(company1, "description.txt",
+ new MemoryStream(byteArray1), "text/plain");
+ session.Advanced.Attachments.Store(company2, "description.txt",
+ new MemoryStream(byteArray2), "text/plain");
+ session.Advanced.Attachments.Store(company3, "description.txt",
+ new MemoryStream(byteArray3), "text/plain");
+ session.Advanced.Attachments.Store(company4, "description.txt",
+ new MemoryStream(byteArray4), "text/plain");
+ session.Advanced.Attachments.Store(company5, "description.txt",
+ new MemoryStream(byteArray5), "text/plain");
+
+ session.SaveChanges();
+}
+`}
+
+
+
+
+Execute a vector search using the index:
+Results will include _Company_ documents whose attachment contains text similar to `"chinese food"`.
+
+
+
+
+{`var relevantCompanies = session
+    .Query<Companies_ByVector_FromTextAttachment.IndexEntry,
+        Companies_ByVector_FromTextAttachment>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromAttachment),
+ searchTerm => searchTerm
+ .ByText("chinese food"), 0.8f)
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Company>()
+ .ToList();
+`}
+
+
+
+
+{`var relevantCompanies = await asyncSession
+    .Query<Companies_ByVector_FromTextAttachment.IndexEntry,
+        Companies_ByVector_FromTextAttachment>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromAttachment),
+ searchTerm => searchTerm
+ .ByText("chinese food"), 0.8f)
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Company>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var relevantCompanies = session.Advanced
+    .DocumentQuery<Companies_ByVector_FromTextAttachment.IndexEntry,
+        Companies_ByVector_FromTextAttachment>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromAttachment),
+ searchTerm => searchTerm
+ .ByText("chinese food"), 0.8f)
+ .WaitForNonStaleResults()
+    .OfType<Company>()
+ .ToList();
+`}
+
+
+
+
+{`var relevantCompanies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Companies_ByVector_FromTextAttachment.IndexEntry,
+        Companies_ByVector_FromTextAttachment>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromAttachment),
+ searchTerm => searchTerm
+ .ByText("chinese food"), 0.8f)
+ .WaitForNonStaleResults()
+    .OfType<Company>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var relevantCompanies = session.Advanced
+    .RawQuery<Company>(@"
+ from index 'Companies/ByVector/FromTextAttachment'
+ where vector.search(VectorFromAttachment, $searchTerm, 0.8)")
+ .AddParameter("searchTerm", "chinese food")
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var relevantCompanies = await asyncSession.Advanced
+    .AsyncRawQuery<Company>(@"
+ from index 'Companies/ByVector/FromTextAttachment'
+ where vector.search(VectorFromAttachment, $searchTerm, 0.8)")
+ .AddParameter("searchTerm", "chinese food")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Companies/ByVector/FromTextAttachment"
+where vector.search(VectorFromAttachment, $searchTerm, 0.8)
+{ "searchTerm" : "chinese food" }
+`}
+
+
+
+
+You can now extract the text from the attachments of the resulting documents:
+
+
+
+{`// Extract text from the attachment of the first resulting document
+// ================================================================
+
+// Retrieve the attachment stream
+var company = relevantCompanies[0];
+var attachmentResult = session.Advanced.Attachments.Get(company, "description.txt");
+var attStream = attachmentResult.Stream;
+
+// Read the attachment content into memory and decode it as a UTF-8 string
+var ms = new MemoryStream();
+attStream.CopyTo(ms);
+string attachmentText = Encoding.UTF8.GetString(ms.ToArray());
+`}
+
+
+
+## Indexing NUMERICAL attachments
+
+### LINQ index
+
+* The following index defines a **vector field** named `VectorFromAttachment`.
+
+* It indexes embeddings generated from the numerical data stored in the `vector.raw` attachment.
+ This applies to all _Company_ documents that contain an attachment with that name.
+
+* Each attachment contains raw numerical data in 32-bit floating-point format.
+
+
+
+
+{`public class Companies_ByVector_FromNumericalAttachment :
+    AbstractIndexCreationTask<Company, IndexEntry>
+{
+ public class IndexEntry()
+ {
+ // This index-field will hold embeddings
+ // generated from the NUMERICAL content in the attachments.
+ public object VectorFromAttachment { get; set; }
+ }
+
+ public Companies_ByVector_FromNumericalAttachment()
+ {
+ Map = companies => from company in companies
+
+ // Load the attachment from the document (ensure it is not null)
+ let attachment = LoadAttachment(company, "vector.raw")
+ where attachment != null
+
+ select new IndexEntry
+ {
+ // Index the attachment's content in the vector field
+ VectorFromAttachment = CreateVector(attachment.GetContentAsStream())
+ };
+
+ // Configure the vector field:
+ VectorIndexes.Add(x => x.VectorFromAttachment,
+ new VectorOptions()
+ {
+ // Define the source embedding type
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ // Define the desired destination format within the index
+ DestinationEmbeddingType = VectorEmbeddingType.Single
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`var indexDefinition = new IndexDefinition
+{
+ Name = "Companies/ByVector/FromNumericalAttachment",
+
+    Maps = new HashSet<string>
+ {
+ @"from company in docs.Companies
+
+ let attachment = LoadAttachment(company, ""vector.raw"")
+ where attachment != null
+
+ select new
+ {
+ VectorFromAttachment = CreateVector(attachment.GetContentAsStream())
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromAttachment",
+ new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ DestinationEmbeddingType = VectorEmbeddingType.Single
+ }
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+
+{`// These vectors are simple pre-computed embedding vectors with 32-bit floating-point values.
+// Note: In a real scenario, embeddings would be generated by a model.
+// ==========================================================================================
+var v1 = new float[] { 0.1f, 0.2f, 0.3f, 0.4f };
+var v2 = new float[] { 0.1f, 0.7f, 0.8f, 0.9f };
+var v3 = new float[] { 0.5f, 0.6f, 0.7f, 0.8f };
+
+// Prepare the embedding vectors as \`byte[]\` to be stored as attachments:
+// =====================================================================
+var byteArray1 = MemoryMarshal.Cast<float, byte>(v1).ToArray();
+var byteArray2 = MemoryMarshal.Cast<float, byte>(v2).ToArray();
+var byteArray3 = MemoryMarshal.Cast<float, byte>(v3).ToArray();
+
+using (var session = store.OpenSession())
+{
+ // Load existing Company documents from RavenDB's sample data:
+ // ===========================================================
+    var company1 = session.Load<Company>("companies/50-A");
+    var company2 = session.Load<Company>("companies/51-A");
+    var company3 = session.Load<Company>("companies/52-A");
+
+ // Store the attachments in the documents (using MemoryStream):
+ // ============================================================
+ session.Advanced.Attachments.Store(company1, "vector.raw", new MemoryStream(byteArray1));
+ session.Advanced.Attachments.Store(company2, "vector.raw", new MemoryStream(byteArray2));
+ session.Advanced.Attachments.Store(company3, "vector.raw", new MemoryStream(byteArray3));
+
+ session.SaveChanges();
+}
+`}
+
+
+
+
+Execute a vector search using the index:
+Results will include _Company_ documents whose attachment contains vectors similar to the query vector.
+
+
+
+
+{`var similarCompanies = session
+    .Query<Companies_ByVector_FromNumericalAttachment.IndexEntry,
+        Companies_ByVector_FromNumericalAttachment>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromAttachment),
+ queryVector => queryVector
+ .ByEmbedding(new float[] { 0.1f, 0.2f, 0.3f, 0.4f }))
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Company>()
+ .ToList();
+`}
+
+
+
+
+{`var similarCompanies = await asyncSession
+    .Query<Companies_ByVector_FromNumericalAttachment.IndexEntry,
+        Companies_ByVector_FromNumericalAttachment>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromAttachment),
+ queryVector => queryVector
+ .ByEmbedding(new float[] { 0.1f, 0.2f, 0.3f, 0.4f }))
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Company>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarCompanies = session.Advanced
+    .DocumentQuery<Companies_ByVector_FromNumericalAttachment.IndexEntry,
+        Companies_ByVector_FromNumericalAttachment>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromAttachment),
+ queryVector => queryVector
+ .ByEmbedding(new float[] { 0.1f, 0.2f, 0.3f, 0.4f }))
+ .WaitForNonStaleResults()
+    .OfType<Company>()
+ .ToList();
+`}
+
+
+
+
+{`var similarCompanies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Companies_ByVector_FromNumericalAttachment.IndexEntry,
+        Companies_ByVector_FromNumericalAttachment>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromAttachment),
+ queryVector => queryVector
+ .ByEmbedding(new float[] { 0.1f, 0.2f, 0.3f, 0.4f }))
+ .WaitForNonStaleResults()
+    .OfType<Company>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarCompanies = session.Advanced
+    .RawQuery<Company>(@"
+ from index 'Companies/ByVector/FromNumericalAttachment'
+ where vector.search(VectorFromAttachment, $queryVector)")
+ .AddParameter("queryVector", new float[] { 0.1f, 0.2f, 0.3f, 0.4f })
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var similarCompanies = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Companies/ByVector/FromNumericalAttachment'
+ where vector.search(VectorFromAttachment, $queryVector)")
+ .AddParameter("queryVector", new float[] { 0.1f, 0.2f, 0.3f, 0.4f })
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Companies/ByVector/FromNumericalAttachment"
+where vector.search(VectorFromAttachment, $queryVector)
+{ "queryVector" : [0.1, 0.2, 0.3, 0.4] }
+`}
+
+
+
+
+### JS index
+
+* The following is the JavaScript index format equivalent to the [LINQ index](../../../ai-integration/vector-search/indexing-attachments-for-vector-search.mdx#linq-index) shown above.
+
+* The main difference is that JavaScript indexes do Not support `getContentAsStream()` on attachment objects:
+ * Because of this, embedding vectors must be stored in attachments as **Base64-encoded strings**.
+ * Use `getContentAsString()` to retrieve the attachment content as a string, as shown in this example.
+
+
+
+
+{`public class Companies_ByVector_FromNumericalAttachment_JS :
+ AbstractJavaScriptIndexCreationTask
+{
+ public Companies_ByVector_FromNumericalAttachment_JS()
+ {
+        Maps = new HashSet<string>()
+ {
+ @"map('Companies', function (company) {
+
+ var attachment = loadAttachment(company, 'vector_base64.raw');
+ if (!attachment) return null;
+
+ return {
+ VectorFromAttachment: createVector(attachment.getContentAsString('utf8'))
+ };
+ })"
+ };
+
+ Fields = new();
+ Fields.Add("VectorFromAttachment", new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ DestinationEmbeddingType = VectorEmbeddingType.Single
+ }
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`// These vectors are simple pre-computed embedding vectors with 32-bit floating-point values.
+// Note: In a real scenario, embeddings would be generated by a model.
+// ==========================================================================================
+var v1 = new float[] { 0.1f, 0.2f, 0.3f, 0.4f };
+var v2 = new float[] { 0.1f, 0.7f, 0.8f, 0.9f };
+var v3 = new float[] { 0.5f, 0.6f, 0.7f, 0.8f };
+
+// Prepare the embedding vectors as a BASE64 string to be stored as attachments:
+// =============================================================================
+var base64ForV1 = Convert.ToBase64String(MemoryMarshal.Cast<float, byte>(v1));
+var base64ForV2 = Convert.ToBase64String(MemoryMarshal.Cast<float, byte>(v2));
+var base64ForV3 = Convert.ToBase64String(MemoryMarshal.Cast<float, byte>(v3));
+
+// Convert to byte[] for streaming:
+// ================================
+var byteArray1 = Encoding.UTF8.GetBytes(base64ForV1);
+var byteArray2 = Encoding.UTF8.GetBytes(base64ForV2);
+var byteArray3 = Encoding.UTF8.GetBytes(base64ForV3);
+
+using (var session = store.OpenSession())
+{
+ // Load existing Company documents from RavenDB's sample data:
+ // ===========================================================
+    var company1 = session.Load<Company>("companies/60-A");
+    var company2 = session.Load<Company>("companies/61-A");
+    var company3 = session.Load<Company>("companies/62-A");
+
+ // Store the attachments in the documents (using MemoryStream):
+ // ============================================================
+ session.Advanced.Attachments.Store(company1, "vector_base64.raw", new MemoryStream(byteArray1));
+ session.Advanced.Attachments.Store(company2, "vector_base64.raw", new MemoryStream(byteArray2));
+ session.Advanced.Attachments.Store(company3, "vector_base64.raw", new MemoryStream(byteArray3));
+
+ session.SaveChanges();
+}
+`}
+
+
+
+
+Execute a vector search using the index:
+Results will include _Company_ documents whose attachment contains a vector similar to the query vector.
+
+
+
+
+{`var similarCompanies = session.Advanced
+ .RawQuery(@"
+ from index 'Companies/ByVector/FromNumericalAttachment/JS'
+ where vector.search(VectorFromAttachment, $queryVector)")
+ .AddParameter("queryVector", new float[] { 0.1f, 0.2f, 0.3f, 0.4f })
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var similarCompanies = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Companies/ByVector/FromNumericalAttachment/JS'
+ where vector.search(VectorFromAttachment, $queryVector)")
+ .AddParameter("queryVector", new float[] { 0.1f, 0.2f, 0.3f, 0.4f })
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Companies/ByVector/FromNumericalAttachment/JS"
+where vector.search(VectorFromAttachment, $queryVector)
+{ "queryVector" : [0.1, 0.2, 0.3, 0.4] }
+`}
+
+
+
+
+## Indexing ALL attachments
+
+* The following index defines a vector field named `VectorFromAttachment`.
+
+* It indexes embeddings generated from the numerical data stored in ALL attachments of all _Company_ documents.
+
+
+
+
+{`public class Companies_ByVector_AllAttachments :
+    AbstractIndexCreationTask<Company>
+{
+    public class IndexEntry
+ {
+ // This index-field will hold embeddings
+ // generated from the NUMERICAL content of ALL attachments.
+ public object VectorFromAttachment { get; set; }
+ }
+
+ public Companies_ByVector_AllAttachments()
+ {
+ Map = companies => from company in companies
+
+ // Load ALL attachments from the document
+ let attachments = LoadAttachments(company)
+
+ select new IndexEntry
+ {
+ // Index the attachments content in the vector field
+ VectorFromAttachment = CreateVector(
+ attachments.Select(e => e.GetContentAsStream()))
+ };
+
+ // Configure the vector field:
+ VectorIndexes.Add(x => x.VectorFromAttachment,
+ new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ DestinationEmbeddingType = VectorEmbeddingType.Single
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`var indexDefinition = new IndexDefinition
+{
+ Name = "Companies/ByVector/AllAttachments",
+
+    Maps = new HashSet<string>
+ {
+ @"from company in docs.Companies
+
+ let attachments = LoadAttachments(company)
+
+ select new
+ {
+ VectorFromAttachment =
+ CreateVector(attachments.Select(e => e.GetContentAsStream()))
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromAttachment",
+ new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+ }
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+
+{`// These vectors are simple pre-computed embedding vectors with 32-bit floating-point values.
+// Note: In a real scenario, embeddings would be generated by a model.
+// ==========================================================================================
+var v1 = new float[] { 0.1f, 0.2f, 0.3f, 0.4f };
+var v2 = new float[] { 0.5f, 0.6f, 0.7f, 0.8f };
+
+var v3 = new float[] { -0.1f, 0.2f, -0.7f, -0.8f };
+var v4 = new float[] { 0.3f, -0.6f, 0.9f, -0.9f };
+
+// Prepare the embedding vectors as \`byte[]\` to be stored as attachments:
+// =====================================================================
+var byteArray1 = MemoryMarshal.Cast<float, byte>(v1).ToArray();
+var byteArray2 = MemoryMarshal.Cast<float, byte>(v2).ToArray();
+
+var byteArray3 = MemoryMarshal.Cast<float, byte>(v3).ToArray();
+var byteArray4 = MemoryMarshal.Cast<float, byte>(v4).ToArray();
+
+using (var session = store.OpenSession())
+{
+ // Load existing Company documents from RavenDB's sample data:
+ // ===========================================================
+    var company1 = session.Load<Company>("companies/70-A");
+    var company2 = session.Load<Company>("companies/71-A");
+
+ // Store multiple attachments in the documents (using MemoryStream):
+ // =================================================================
+
+ session.Advanced.Attachments.Store(company1, "vector1.raw", new MemoryStream(byteArray1));
+ session.Advanced.Attachments.Store(company1, "vector2.raw", new MemoryStream(byteArray2));
+
+ session.Advanced.Attachments.Store(company2, "vector1.raw", new MemoryStream(byteArray3));
+ session.Advanced.Attachments.Store(company2, "vector2.raw", new MemoryStream(byteArray4));
+
+ session.SaveChanges();
+}
+`}
+
+
+
+
+Execute a vector search using the index:
+Results will include _Company_ documents whose attachments contain vectors similar to the query vector.
+
+
+
+
+{`var similarCompanies = session
+    .Query<Companies_ByVector_AllAttachments.IndexEntry,
+           Companies_ByVector_AllAttachments>()
+    .VectorSearch(
+        field => field
+            .WithField(x => x.VectorFromAttachment),
+        queryVector => queryVector
+            .ByEmbedding(new float[] { -0.1f, 0.2f, -0.7f, -0.8f }))
+    .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Company>()
+    .ToList();
+`}
+
+
+
+
+{`var similarCompanies = await asyncSession
+    .Query<Companies_ByVector_AllAttachments.IndexEntry,
+           Companies_ByVector_AllAttachments>()
+    .VectorSearch(
+        field => field
+            .WithField(x => x.VectorFromAttachment),
+        queryVector => queryVector
+            .ByEmbedding(new float[] { -0.1f, 0.2f, -0.7f, -0.8f }))
+    .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Company>()
+    .ToListAsync();
+`}
+
+
+
+
+{`var similarCompanies = session.Advanced
+    .DocumentQuery<Companies_ByVector_AllAttachments.IndexEntry,
+                   Companies_ByVector_AllAttachments>()
+    .VectorSearch(
+        field => field
+            .WithField(x => x.VectorFromAttachment),
+        queryVector => queryVector
+            .ByEmbedding(new float[] { -0.1f, 0.2f, -0.7f, -0.8f }))
+    .WaitForNonStaleResults()
+    .OfType<Company>()
+    .ToList();
+`}
+
+
+
+
+{`var similarCompanies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Companies_ByVector_AllAttachments.IndexEntry,
+                        Companies_ByVector_AllAttachments>()
+    .VectorSearch(
+        field => field
+            .WithField(x => x.VectorFromAttachment),
+        queryVector => queryVector
+            .ByEmbedding(new float[] { -0.1f, 0.2f, -0.7f, -0.8f }))
+    .WaitForNonStaleResults()
+    .OfType<Company>()
+    .ToListAsync();
+`}
+
+
+
+
+{`var similarCompanies = session.Advanced
+ .RawQuery(@"
+ from index 'Companies/ByVector/AllAttachments'
+ where vector.search(VectorFromAttachment, $queryVector)")
+ .AddParameter("queryVector", new float[] { 0.1f, 0.2f, -0.7f, -0.8f })
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var similarCompanies = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Companies/ByVector/AllAttachments'
+ where vector.search(VectorFromAttachment, $queryVector)")
+ .AddParameter("queryVector", new float[] { 0.1f, 0.2f, -0.7f, -0.8f })
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Companies/ByVector/AllAttachments"
+where vector.search(VectorFromAttachment, $queryVector)
+{ "queryVector" : [0.1, 0.2, -0.7, -0.8] }
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/content/_vector-search-using-dynamic-query-csharp.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/content/_vector-search-using-dynamic-query-csharp.mdx
new file mode 100644
index 0000000000..4001ed9b44
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/content/_vector-search-using-dynamic-query-csharp.mdx
@@ -0,0 +1,1806 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to run a vector search using a **dynamic query**.
+ To learn how to run a vector search using a static-index, see [vector search using a static-index](../../../ai-integration/vector-search/vector-search-using-static-index.mdx).
+
+* In this article:
+ * [What is a vector search](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#what-is-a-vector-search)
+ * [Dynamic vector search - query overview](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---query-overview)
+ * [Creating embeddings for the auto-index](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#creating-embeddings-for-the-auto-index)
+ * [Retrieving results](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#retrieving-results)
+ * [The dynamic query parameters](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#the-dynamic-query-parameters)
+ * [Corax auto-indexes](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#corax-auto-indexes)
+ * [Dynamic vector search - querying TEXT](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---querying-text)
+ * [Querying raw text](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-raw-text)
+ * [Querying pre-made embeddings generated by tasks](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-pre-made-embeddings-generated-by-tasks)
+ * [Dynamic vector search - querying NUMERICAL content](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---querying-numerical-content)
+ * [Dynamic vector search - querying for similar documents](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---querying-for-similar-documents)
+ * [Dynamic vector search - exact search](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---exact-search)
+ * [Quantization options](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options)
+ * [Querying vector fields and regular data in the same query](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-vector-fields-and-regular-data-in-the-same-query)
+ * [Combining multiple vector searches in the same query](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#combining-multiple-vector-searches-in-the-same-query)
+ * [Syntax](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#syntax)
+
+
+
+## What is a vector search
+
+* Vector search is a method for finding documents based on their **contextual similarity** to the search item provided in a given query.
+
+* Your data is converted into vectors, known as **embeddings**, and stored in a multidimensional space.
+ Unlike traditional keyword-based searches, which rely on exact matches,
+ vector search identifies vectors closest to your query vector and retrieves the corresponding documents.
+
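+As a rough illustration of what "closest" means (a minimal sketch only - this is not RavenDB's internal implementation), embeddings can be compared with cosine similarity, where a score closer to 1.0 means the vectors point in more similar directions:
+
+```csharp
+// A minimal sketch of cosine-similarity ranking (illustration only -
+// RavenDB performs this matching server-side, inside the index):
+static float CosineSimilarity(float[] a, float[] b)
+{
+    float dot = 0, normA = 0, normB = 0;
+    for (var i = 0; i < a.Length; i++)
+    {
+        dot += a[i] * b[i];
+        normA += a[i] * a[i];
+        normB += b[i] * b[i];
+    }
+    return dot / (MathF.Sqrt(normA) * MathF.Sqrt(normB));
+}
+
+// The document whose embedding scores highest is the "closest" match:
+var queryVector = new float[] { 0.1f, 0.2f, 0.3f };
+var docEmbeddings = new Dictionary<string, float[]>
+{
+    ["docs/1"] = new[] { 0.2f, 0.4f, 0.6f }, // same direction => similarity 1.0
+    ["docs/2"] = new[] { 0.9f, 0.1f, 0.0f }  // different direction => lower score
+};
+
+var closest = docEmbeddings
+    .OrderByDescending(kvp => CosineSimilarity(queryVector, kvp.Value))
+    .First().Key; // "docs/1"
+```
+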
+## Dynamic vector search - query overview
+
+
+
+#### Overview
+
+* A dynamic vector search query can be performed on:
+ * Raw text stored in your documents.
+ * Pre-made embeddings that you created yourself and stored using these [Data types](../../../ai-integration/vector-search/data-types-for-vector-search.mdx).
+ * Pre-made embeddings that are automatically generated from your document content
+ by RavenDB's [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx) using external service providers.
+
+* Note: Vector search queries cannot be used with [Subscription queries](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-query).
+
+* When executing a dynamic vector search query, RavenDB creates a [Corax Auto-Index](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#corax-auto-indexes) to process the query,
+ and the results are retrieved from that index.
+
+* To make a **dynamic vector search query**:
+ * From the Client API - use method `VectorSearch()`
+ * In RQL - use method `vector.search()`
+ * Examples are provided below
+
+
+
+
+
+#### Creating embeddings for the Auto-index
+
+* **Creating embeddings from TEXTUAL content**:
+
+ * **Pre-made embeddings via tasks**:
+ Embeddings can be created from textual content in your documents by defining [Tasks that generate embeddings](../../../ai-integration/generating-embeddings/overview.mdx).
+ When performing a dynamic vector search query over textual data and explicitly specifying the task,
+ results will be retrieved by comparing your search term against the embeddings previously generated by that task.
+ A query example is available in: [Querying pre-made embeddings generated by tasks](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-pre-made-embeddings-generated-by-tasks).
+
+ * **Default embeddings generation**:
+ When querying textual data without specifying a task, RavenDB generates an embedding vector for the specified document field in each document of the queried collection,
+ using the built-in [bge-micro-v2](https://huggingface.co/TaylorAI/bge-micro-v2) sentence-transformer model.
+ A query example is available in: [Querying raw text](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-raw-text).
+
+* **Creating embeddings from NUMERICAL arrays**:
+ When querying over pre-made numerical arrays that are already in vector format,
+ RavenDB will index them without transformation (unless further quantization is applied).
+ A query example is available in: [Vector search on numerical content](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---querying-numerical-content).
+
+  To avoid index errors, ensure that the dimensionality of these numerical arrays (i.e., their length)
+  is consistent across all your source documents for the field you are querying
+  (a simple client-side validation sketch is shown after this list).
+  If you wish to enforce such consistency on the server side,
+  perform a vector search using a [Static-index](../../../ai-integration/vector-search/vector-search-using-static-index.mdx) instead of a dynamic query.
+
+
+* **Quantizing the embeddings**:
+ The embeddings are quantized based on the parameters specified in the query.
+ Learn more about quantization in [Quantization options](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options).
+
+* **Indexing the embeddings**:
+ RavenDB indexes the embeddings on the server using the [HNSW algorithm](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world).
+ This algorithm organizes embeddings into a high-dimensional graph structure,
+ enabling efficient retrieval of Approximate Nearest Neighbors (ANN) during queries.
+
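+
+A simple client-side check can verify the dimensionality consistency mentioned above before the documents are stored (a sketch only, assuming the embeddings are plain `float[]` arrays):
+
+```csharp
+// A client-side sketch (assumption: the embeddings are plain float[] arrays).
+// Verify that all embedding vectors share the same dimensionality
+// before storing them in the documents:
+var embeddings = new List<float[]>
+{
+    new[] { 0.1f, 0.2f, 0.3f, 0.4f },
+    new[] { 0.5f, 0.6f, 0.7f, 0.8f }
+};
+
+var dimensions = embeddings[0].Length;
+if (embeddings.Any(v => v.Length != dimensions))
+    throw new InvalidOperationException(
+        $"All embedding vectors must have {dimensions} dimensions.");
+```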
+
+
+
+
+#### Retrieving results
+
+* **Processing the query**:
+ To ensure consistent comparisons, the **search term** is transformed into an embedding vector using the same method as the document fields.
+ The server will search for the most similar vectors in the indexed vector space,
+ taking into account all the [query parameters](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#the-dynamic-query-parameters) described below.
+ The documents that correspond to the resulting vectors are then returned to the client.
+
+* **Search results**:
+ By default, the resulting documents will be ordered by their score.
+ You can modify this behavior using the [Indexing.Corax.VectorSearch.OrderByScoreAutomatically](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchorderbyscoreautomatically) configuration key.
+ In addition, you can apply any of the 'order by' methods to your query, as explained in [sort query results](../../../client-api/session/querying/sort-query-results.mdx).
+
+
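+
+For example, a minimal sketch of making the ordering explicit with a document query (assuming the `Product` entity used in the examples below):
+
+```csharp
+// A sketch only: make the score-based ordering of vector search results explicit.
+// Any other 'order by' method can be applied instead.
+var results = session.Advanced
+    .DocumentQuery<Product>()
+    .VectorSearch(
+        field => field.WithText(x => x.Name),
+        searchTerm => searchTerm.ByText("italian food"),
+        0.82f)
+    .OrderByScore()
+    .ToList();
+```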
+
+
+
+#### The dynamic query parameters
+
+* **Source data format**
+ RavenDB supports performing vector search on TEXTUAL values or NUMERICAL arrays.
+  The source data can be formatted as `Text`, `Single`, `Int8`, or `Binary`.
+
+* **Target quantization**
+ You can specify the quantization encoding for the embeddings that will be created from source data.
+ Learn more about quantization in [Quantization options](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options).
+
+* **Minimum similarity**
+ You can specify the minimum similarity to use when searching for related vectors.
+ The value can be between `0.0f` and `1.0f`.
+ * A value closer to `1.0f` requires higher similarity between vectors,
+ while a value closer to `0.0f` allows for less similarity.
+ * **Important**: To filter out less relevant results when performing vector search queries,
+ it is recommended to explicitly specify the minimum similarity level at query time.
+
+ If not specified, the default value is taken from the following configuration key:
+ [Indexing.Corax.VectorSearch.DefaultMinimumSimilarity](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultminimumsimilarity).
+
+* **Number of candidates**
+ You can specify the maximum number of vectors that RavenDB will return from a graph search.
+  The number of resulting documents that correspond to these vectors may be:
+  * lower than the number of candidates - when multiple vectors originate from the same document.
+ * higher than the number of candidates - when the same vector is shared between multiple documents.
+
+ If not specified, the default value is taken from the following configuration key:
+ [Indexing.Corax.VectorSearch.DefaultNumberOfCandidatesForQuerying](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofcandidatesforquerying).
+
+* **Search method**
+ * _Approximate Nearest-Neighbor search_ (Default):
+ Search for related vectors in an approximate manner, providing faster results.
+ * _Exact search_:
+ Perform a thorough scan of the vectors to find the actual closest vectors,
+ offering better accuracy but at a higher computational cost.
+ Learn more in [Exact search](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---exact-search).
+
+
+
+
+
+#### Corax auto-indexes
+
+* Only [Corax indexes](../../../indexes/search-engine/corax.mdx) support vector search.
+
+* Even if your **default auto-index engine** is set to Lucene (via [Indexing.Auto.SearchEngineType](../../../server/configuration/indexing-configuration.mdx#indexingautosearchenginetype)),
+ performing a vector search using a dynamic query will create a new auto-index based on Corax.
+
+* Normally, new dynamic queries extend existing [auto-indexes](../../../client-api/session/querying/how-to-query.mdx#queries-always-provide-results-using-an-index) if they require additional fields.
+ However, a dynamic query with a vector search will not extend an existing Lucene-based auto-index.
+
+
+  For example, suppose you have an existing **Lucene**-based auto-index on the Employees collection, e.g.,
+  `Auto/Employees/ByFirstName`.
+
+ Now, you run a query that:
+
+ * searches for Employees by _LastName_ (a regular text search)
+ * and performs a vector search over the _Notes_ field.
+
+ The following new **Corax**-based auto-index will be created:
+ `Auto/Employees/ByLastNameAndVector.search(embedding.text(Notes))`,
+ and the existing **Lucene** index on Employees will not be deleted or extended.
+
+
+
+
+## Dynamic vector search - querying TEXT
+
+### Querying raw text
+
+* The following example searches for Product documents where the _'Name'_ field is similar to the search term `"italian food"`.
+
+* Since the query does Not specify an [Embeddings generation task](../../../ai-integration/generating-embeddings/overview.mdx),
+ RavenDB dynamically generates embedding vectors for the _'Name'_ field of each document in the queried collection using the built-in
+ [bge-micro-v2](https://huggingface.co/TaylorAI/bge-micro-v2) text-embedding model.
+ The generated embeddings are indexed within the auto-index.
+ Unlike embeddings pre-made by tasks, this process does not create dedicated collections for storing embeddings.
+
+* Since this query does not specify a target quantization format,
+ the generated embedding vectors will be encoded in the default _Single_ format (single-precision floating-point).
+ Refer to [Quantization options](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options) for examples that specify the destination quantization.
+
+
+
+```csharp
+var similarProducts = session.Query<Product>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ // Call 'WithText'
+ // Specify the document field in which to search for similar values
+ field => field.WithText(x => x.Name),
+ // Call 'ByText'
+ // Provide the term for the similarity comparison
+ searchTerm => searchTerm.ByText("italian food"),
+ // It is recommended to specify the minimum similarity level
+ 0.82f,
+ // Optionally, specify the number of candidates for the search
+ 20)
+ // Waiting for not-stale results is not mandatory
+ // but will assure results are not stale
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Query<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ 0.82f,
+ 20)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+    .DocumentQuery<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ 0.82f,
+ 20)
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+    .AsyncDocumentQuery<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ 0.82f,
+ 20)
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+ .RawQuery(@"
+ from 'Products'
+ // Wrap the document field 'Name' with 'embedding.text' to indicate the source data type
+ where vector.search(embedding.text(Name), $searchTerm, 0.82, 20)")
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from 'Products'
+ // Wrap the document field 'Name' with 'embedding.text' to indicate the source data type
+ where vector.search(embedding.text(Name), $searchTerm, 0.82, 20)")
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+// Query the Products collection
+from "Products"
+// Call 'vector.search'
+// Wrap the document field 'Name' with 'embedding.text' to indicate the source data type
+where vector.search(embedding.text(Name), "italian food", 0.82, 20)
+```
+
+
+
+* Executing the above query on the RavenDB sample data will create the following **auto-index**:
+ `Auto/Products/ByVector.search(embedding.text(Name))`
+
+ 
+
+* Running the same query at a lower similarity level will return more results related to _"Italian food"_ but they may be less similar:
+
+ 
+
+### Querying pre-made embeddings generated by tasks
+
+* The following example searches for Category documents where the _'Name'_ field is similar to the search term `"candy"`.
+
+* The query explicitly specifies the **identifier** of the embeddings generation task that was defined in
+ [this example](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configuring-an-embeddings-generation-task---from-the-studio).
+ An `InvalidQueryException` will be thrown if no embeddings generation task with the specified identifier exists.
+
+  To avoid this error, you can verify that the specified embeddings generation task exists before issuing the query.
+  Refer to [Get embeddings generation task details](../../../ai-integration/generating-embeddings/overview.mdx#get-embeddings-generation-task-details)
+  to learn how to programmatically check which tasks are defined and what their identifiers are.
+
+* Results are retrieved by comparing the search term against the pre-made embeddings generated by the specified task,
+ which are stored in the [Embedding collections](../../../ai-integration/generating-embeddings/embedding-collections.mdx).
+ To ensure consistent comparisons, the search term is transformed into an embedding using the same embeddings generation task.
+
+
+
+```csharp
+var similarCategories = session.Query<Category>()
+ .VectorSearch(
+ field => field
+ // Call 'WithText'
+ // Specify the document field in which to search for similar values
+ .WithText(x => x.Name)
+ // Call 'UsingTask'
+ // Specify the identifier of the task that generated
+ // the embeddings for the Name field
+ .UsingTask("id-for-task-open-ai"),
+ // Call 'ByText'
+ // Provide the search term for the similarity comparison
+ searchTerm => searchTerm.ByText("candy"),
+ // It is recommended to specify the minimum similarity level
+ 0.75f)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```csharp
+var similarCategories = await asyncSession.Query<Category>()
+ .VectorSearch(
+ field => field
+ .WithText(x => x.Name)
+ .UsingTask("id-for-task-open-ai"),
+ searchTerm => searchTerm.ByText("candy"),
+ 0.75f)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToListAsync();
+```
+
+
+```csharp
+var similarCategories = session.Advanced
+    .DocumentQuery<Category>()
+ .VectorSearch(
+ field => field
+ .WithText(x => x.Name)
+ .UsingTask("id-for-task-open-ai"),
+ searchTerm => searchTerm.ByText("candy"),
+ 0.75f)
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarCategories = await asyncSession.Advanced
+    .AsyncDocumentQuery<Category>()
+ .VectorSearch(
+ field => field
+ .WithText(x => x.Name)
+ .UsingTask("id-for-task-open-ai"),
+ searchTerm => searchTerm.ByText("candy"),
+ 0.75f)
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarCategories = session.Advanced
+ .RawQuery(@"
+ from 'Categories'
+ // Specify the identifier of the task that generated the embeddings inside 'ai.task'
+ where vector.search(embedding.text(Name, ai.task('id-for-task-open-ai')), $searchTerm, 0.75)")
+ .AddParameter("searchTerm", "candy")
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarCategories = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from 'Categories'
+ // Specify the identifier of the task that generated the embeddings inside 'ai.task'
+ where vector.search(embedding.text(Name, ai.task('id-for-task-open-ai')), $searchTerm, 0.75)")
+ .AddParameter("searchTerm", "candy")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+// Query the Categories collection
+from "Categories"
+// Call 'vector.search'
+// Specify the identifier of the task that generated the embeddings inside the 'ai.task' method
+where vector.search(embedding.text(Name, ai.task('id-for-task-open-ai')), $searchTerm, 0.75)
+{"searchTerm": "candy"}
+```
+
+
+
+* Executing the above query on the RavenDB sample data will create the following **auto-index**:
+ `Auto/Categories/ByVector.search(embedding.text(Name|ai.task('id-for-task-open-ai')))`
+
+## Dynamic vector search - querying NUMERICAL content
+
+* The following examples will use the sample data shown below.
+ The _Movie_ class includes various formats of numerical vector data.
+ Note: This sample data is minimal to keep the examples simple.
+
+* Note the usage of RavenDB's dedicated data type, [RavenVector](../../../ai-integration/vector-search/data-types-for-vector-search.mdx#ravenvector), which is highly optimized for reading and writing arrays to disk.
+ Learn more about the source data types suitable for vector search in [Data types for vector search](../../../ai-integration/vector-search/data-types-for-vector-search.mdx).
+
+* Unlike vector searches on text, where RavenDB transforms the raw text into an embedding vector,
+ numerical vector searches require your source data to already be in an embedding vector format.
+
+* If your raw data is in a _float_ format, you can request further quantization of the embeddings that will be indexed in the auto-index.
+ See an example of this in: [Quantization options](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options).
+
+* Raw data that is already formatted as _Int8_ or _Binary_ **cannot** be quantized to a lower form (e.g., Int8 -> Binary).
+  When storing data in these formats in your documents, you should use [RavenDB’s `VectorQuantizer` methods](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#vectorquantizer).
+
+#### Sample data:
+
+
+
+```csharp
+// Sample class representing a document with various formats of numerical vectors
+// The embedding vectors for these fields here are generated externally by you (not by RavenDB).
+public class Movie
+{
+ public string Id { get; set; }
+ public string Title { get; set; }
+
+ // This field will hold numerical vector data - Not quantized
+    public RavenVector<float> TagsEmbeddedAsSingle { get; set; }
+
+ // This field will hold numerical vector data - Quantized to Int8
+ public sbyte[][] TagsEmbeddedAsInt8 { get; set; }
+
+ // This field will hold numerical vector data - Encoded in Base64 format
+    public List<string> TagsEmbeddedAsBase64 { get; set; }
+
+ // A field for holding a numerical vector data produced by a multimodal model
+ // that converts an image into an embedding
+    public RavenVector<float> MoviePhotoEmbedding { get; set; }
+}
+```
+
+
+```csharp
+using (var session = store.OpenSession())
+{
+ var movie1 = new Movie()
+ {
+ Title = "Hidden Figures",
+ Id = "movies/1",
+
+ // Embedded vector represented as float values
+        TagsEmbeddedAsSingle = new RavenVector<float>(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ }),
+
+ // Embedded vectors encoded in Base64 format
+        TagsEmbeddedAsBase64 = new List<string>()
+ {
+ "zczMPc3MTD6amZk+", "mpmZPs3MzD4AAAA/"
+ },
+
+ // Array of embedded vectors quantized to Int8
+ TagsEmbeddedAsInt8 = new sbyte[][]
+ {
+ // Use RavenDB's quantization methods to convert float vectors to Int8
+ VectorQuantizer.ToInt8(new float[] { 0.1f, 0.2f }),
+ VectorQuantizer.ToInt8(new float[] { 0.3f, 0.4f })
+ },
+
+ // Example of an image embedding
+ // In a real scenario, this vector would come from a multimodal model
+ // such as CLIP, OpenCLIP, or similar
+        MoviePhotoEmbedding = new RavenVector<float>(new float[]
+ {
+ 0.123f, -0.045f, 0.987f, 0.564f, -0.321f, 0.220f
+ })
+ };
+
+ var movie2 = new Movie()
+ {
+ Title = "The Shawshank Redemption",
+ Id = "movies/2",
+
+        TagsEmbeddedAsSingle = new RavenVector<float>(new float[]
+ {
+ 8.800000190734863f, 9.899999618530273f
+ }),
+        TagsEmbeddedAsBase64 = new List<string>() { "zcxMPs3MTD9mZmY/", "zcxMPpqZmT4zMzM/" },
+ TagsEmbeddedAsInt8 = new sbyte[][]
+ {
+ VectorQuantizer.ToInt8(new float[] { 0.5f, 0.6f }),
+ VectorQuantizer.ToInt8(new float[] { 0.7f, 0.8f })
+ },
+
+        MoviePhotoEmbedding = new RavenVector<float>(new float[]
+ {
+ 0.456f, -0.056f, 0.123f, 0.899f, -0.765f, 0.881f
+ })
+ };
+
+ session.Store(movie1);
+ session.Store(movie2);
+ session.SaveChanges();
+}
+```
+
+
+```csharp
+{
+ "Title": "Hidden Figures",
+
+ "TagsEmbeddedAsSingle": {
+ "@vector": [
+ 6.599999904632568,
+ 7.699999809265137
+ ]
+ },
+
+ "TagsEmbeddedAsInt8": [
+ [
+ 64,
+ 127,
+ -51,
+ -52,
+ 76,
+ 62
+ ],
+ [
+ 95,
+ 127,
+ -51,
+ -52,
+ -52,
+ 62
+ ]
+ ],
+
+ "TagsEmbeddedAsBase64": [
+ "zczMPc3MTD6amZk+",
+ "mpmZPs3MzD4AAAA/"
+ ],
+
+ "MoviePhotoEmbedding": {
+ "@vector": [0.123, -0.045, 0.987, 0.564, -0.321, 0.220]
+  },
+
+ "@metadata": {
+ "@collection": "Movies"
+ }
+}
+```
+
+
+
+#### Examples:
+
+These examples search for Movie documents with vectors similar to the one provided in the query.
+
+
+
+* Search on the `TagsEmbeddedAsSingle` field,
+ which contains numerical data in **floating-point format**.
+
+
+
+```csharp
+var similarMovies = session.Query<Movie>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ // Call 'WithEmbedding', specify:
+ // * The source field that contains the embedding in the document
+ // * The source embedding type
+ field => field.WithEmbedding(
+ x => x.TagsEmbeddedAsSingle, VectorEmbeddingType.Single),
+ // Call 'ByEmbedding'
+ // Provide the vector for the similarity comparison
+ queryVector => queryVector.ByEmbedding(
+            new RavenVector<float>(new float[] { 6.599999904632568f, 7.699999809265137f })),
+ // It is recommended to specify the minimum similarity level
+ 0.85f,
+ // Optionally, specify the number of candidates for the search
+ 10)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession.Query<Movie>()
+ .VectorSearch(
+ field => field.WithEmbedding(
+ x => x.TagsEmbeddedAsSingle, VectorEmbeddingType.Single),
+ queryVector => queryVector.ByEmbedding(
+            new RavenVector<float>(new float[] { 6.599999904632568f, 7.699999809265137f })),
+ 0.85f,
+ 10)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToListAsync();
+```
+
+
+```csharp
+var similarMovies = session.Advanced
+    .DocumentQuery<Movie>()
+ .VectorSearch(
+ field => field.WithEmbedding(
+ x => x.TagsEmbeddedAsSingle, VectorEmbeddingType.Single),
+ queryVector => queryVector.ByEmbedding(
+            new RavenVector<float>(new float[] { 6.599999904632568f, 7.699999809265137f })),
+ 0.85f,
+ 10)
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Movie>()
+ .VectorSearch(
+ field => field.WithEmbedding(
+ x => x.TagsEmbeddedAsSingle, VectorEmbeddingType.Single),
+ queryVector => queryVector.ByEmbedding(
+            new RavenVector<float>(new float[] { 6.599999904632568f, 7.699999809265137f })),
+ 0.85f,
+ 10)
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarMovies = session.Advanced
+ .RawQuery(@"
+ from 'Movies'
+ where vector.search(TagsEmbeddedAsSingle, $queryVector, 0.85, 10)")
+ .AddParameter("queryVector", new RavenVector(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ }))
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from 'Movies'
+ where vector.search(TagsEmbeddedAsSingle, $queryVector, 0.85, 10)")
+ .AddParameter("queryVector", new RavenVector(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ }))
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+from "Movies"
+// The source document field type is interpreted as 'Single' by default
+where vector.search(TagsEmbeddedAsSingle, $queryVector, 0.85, 10)
+{ "queryVector" : { "@vector" : [6.599999904632568, 7.699999809265137] }}
+```
+
+
+
+
+
+
+* Search on the `TagsEmbeddedAsInt8` field,
+ which contains numerical data that is already quantized in **_Int8_ format**.
+
+
+
+```csharp
+var similarMovies = session.Query<Movie>()
+ .VectorSearch(
+ // Call 'WithEmbedding', specify:
+ // * The source field that contains the embeddings in the document
+ // * The source embedding type
+ field => field.WithEmbedding(
+ x => x.TagsEmbeddedAsInt8, VectorEmbeddingType.Int8),
+ // Call 'ByEmbedding'
+ // Provide the vector for the similarity comparison
+ // (provide a single vector from the vector list in the TagsEmbeddedAsInt8 field)
+ queryVector => queryVector.ByEmbedding(
+ // The provided vector MUST be in the same format as was stored in your document
+ // Call 'VectorQuantizer.ToInt8' to transform the raw data to the Int8 format
+ VectorQuantizer.ToInt8(new float[] { 0.1f, 0.2f })))
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```sql
+from "Movies"
+// Wrap the source document field name with 'embedding.i8' to indicate the source data type
+where vector.search(embedding.i8(TagsEmbeddedAsInt8), $queryVector)
+{ "queryVector" : [64, 127, -51, -52, 76, 62] }
+```
+
+
+
+
+
+
+* Search on the `TagsEmbeddedAsBase64` field,
+ which contains numerical data represented in **_Base64_ format**.
+
+
+
+```csharp
+var similarMovies = session.Query<Movie>()
+ .VectorSearch(
+ // Call 'WithBase64', specify:
+ // * The source field that contains the embeddings in the document
+ // * The source embedding type
+ // (the type from which the Base64 string was constructed)
+ field => field.WithBase64(x => x.TagsEmbeddedAsBase64, VectorEmbeddingType.Single),
+ // Call 'ByBase64'
+ // Provide the Base64 string that represents the vector to query against
+ queryVector => queryVector.ByBase64("zczMPc3MTD6amZk+"))
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```sql
+from "Movies"
+// * Wrap the source document field name using 'embedding.' to specify
+// the source data type from which the Base64 string was generated.
+// * If the document field is Not wrapped, 'single' is assumed as the default source type.
+where vector.search(TagsEmbeddedAsBase64, $queryVectorBase64)
+{ "queryVectorBase64" : "zczMPc3MTD6amZk+" }
+```
+
+
+
+
+
+## Dynamic vector search - querying for similar documents
+
+* In the above examples, to find documents with similar content, the query was given an arbitrary input -
+ either a [raw textual search term](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---querying-text)
+ or a [numerical query vector](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---querying-numerical-content).
+
+* RavenDB also allows you to search for documents whose content is similar to an **existing document**:
+
+ * To do so, use the `ForDocument` method and specify the existing document ID. See the example below.
+
+ * When performing a dynamic vector query over a field, index-entries are generated in the auto-index,
+ one per document in the collection. Each index-entry contains the document ID and the embedding vector for the queried field.
+
+ * RavenDB retrieves the embedding that was indexed for the queried field in the specified document and uses it as the query vector for the similarity comparison.
+
+ * The results will include documents whose indexed embeddings are most similar to the one stored in the referenced document’s index-entry.
+
+
+
+```csharp
+var similarProducts = session.Query<Product>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ // Call 'WithText'
+ // Specify the document field in which to search for similar values
+ field => field.WithText(x => x.Name),
+ // Call 'ForDocument'
+ // Provide the document ID for which you want to find similar documents.
+ // The embedding stored in the auto-index for the specified document
+ // will be used as the "query vector".
+ embedding => embedding.ForDocument("Products/7-A"),
+ 0.82f)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Query<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ embedding => embedding.ForDocument("Products/7-A"),
+ 0.82f)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+    .DocumentQuery<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ embedding => embedding.ForDocument("Products/7-A"),
+ 0.82f)
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+    .AsyncDocumentQuery<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ embedding => embedding.ForDocument("Products/7-A"),
+ 0.82f)
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+ .RawQuery(@"
+ from 'Products'
+ // Pass a document ID to the 'forDoc' method to find similar documents
+ where vector.search(embedding.text(Name), embedding.forDoc($documentID), 0.82)")
+ .AddParameter("$documentID", "Products/7-A")
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from 'Products'
+ // Pass a document ID to the 'forDoc' method to find similar documents
+ where vector.search(embedding.text(Name), embedding.forDoc($documentID), 0.82)")
+ .AddParameter("$documentID", "Products/7-A")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+from "Products"
+// Pass a document ID to the 'forDoc' method to find similar documents
+where vector.search(embedding.text(Name), embedding.forDoc($documentID), 0.82)
+{"documentID" : "Products/7-A"}
+```
+
+
+
+Running the above example on RavenDB’s sample data returns the following documents that have similar content in their _Name_ field:
+(Note: the results include the referenced document itself, _Products/7-A_)
+
+
+```csharp
+// ID: products/7-A ... Name: "Uncle Bob's Organic Dried Pears"
+// ID: products/51-A ... Name: "Manjimup Dried Apples"
+// ID: products/6-A ... Name: "Grandma's Boysenberry Spread"
+```
+
+
+The auto-index generated by running the above dynamic query is:
+`Auto/Products/ByVector.search(embedding.text(Name))`
+
+You can **view the index-entries** of this auto-index in the Studio's query view:
+
+
+
+1. Go to the Query view in the Studio
+2. Query the index
+3. Open the settings dialog:
+
+
+
+
+
+## Dynamic vector search - exact search
+
+* When performing a dynamic vector search query, you can specify whether to perform an **exact search** to find the closest similar vectors in the vector space:
+ * A thorough scan will be performed to find the actual closest vectors.
+ * This ensures better accuracy but comes at a higher computational cost.
+
+* If exact is Not specified, the search defaults to the **Approximate Nearest-Neighbor** (ANN) method,
+ which finds related vectors in an approximate manner, offering faster results.
+
+* The following example demonstrates how to specify the exact search method in the query.
+  Setting this parameter is done the same way for both text and numerical content searches.
+
+
+
+```csharp
+var similarProducts = session.Query<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ // Optionally, set the 'isExact' param to true to perform an Exact search
+ isExact: true)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Query<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ isExact: true)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+    .DocumentQuery<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ isExact: true)
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+    .AsyncDocumentQuery<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ isExact: true)
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+ .RawQuery(@"
+ from 'Products'
+ // Wrap the query with the 'exact()' method
+ where exact(vector.search(embedding.text(Name), $searchTerm))")
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from 'Products'
+ // Wrap the query with the 'exact()' method
+ where exact(vector.search(embedding.text(Name), $searchTerm))")
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+from "Products"
+// Wrap the vector.search query with the 'exact()' method
+where exact(vector.search(embedding.text(Name), "italian food"))
+```
+
+
+
+## Quantization options
+
+#### What is quantization:
+
+Quantization is a technique that reduces the precision of numerical data.
+It converts high-precision values, such as 32-bit floating-point numbers, into lower-precision formats like 8-bit integers or binary representations.
+
+The quantization process, applied to each dimension (or item) in the numerical array,
+serves as a form of compression by reducing the number of bits used to represent each value in the vector.
+For example, transitioning from 32-bit floats to 8-bit integers significantly reduces data size while preserving the vector's essential structure.
+
+Although it introduces some precision loss, quantization minimizes storage requirements and optimizes memory usage.
+It also reduces computational overhead, making operations like similarity searches faster and more efficient.
+
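+As a rough illustration (a sketch of one common scalar-quantization scheme - not necessarily the exact algorithm RavenDB applies internally), each 32-bit value can be scaled into the signed 8-bit range:
+
+```csharp
+// A sketch of simple scalar quantization (illustration only -
+// not necessarily RavenDB's exact internal scheme):
+// scale each 32-bit float into the signed 8-bit range [-127, 127].
+static sbyte[] QuantizeToInt8(float[] vector)
+{
+    var maxAbs = vector.Max(v => MathF.Abs(v));
+    var scale = maxAbs == 0 ? 0 : 127f / maxAbs;
+    return vector.Select(v => (sbyte)MathF.Round(v * scale)).ToArray();
+}
+
+var original  = new float[] { 0.12f, -0.98f, 0.45f }; // 12 bytes (3 x 4 bytes)
+var quantized = QuantizeToInt8(original);             // 3 bytes  (3 x 1 byte)
+// quantized: [16, -127, 58] - ~75% smaller, at the cost of some precision
+```
+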
+#### Quantization in RavenDB:
+
+For non-quantized raw 32-bit data or text stored in your documents,
+RavenDB allows you to choose the quantization format for the generated embeddings stored in the index.
+The selected quantization type determines the similarity search technique that will be applied.
+
+If no target quantization format is specified, the `Single` option will be used as the default.
+
+The available quantization options are:
+
+ * `Single` (a 32-bit floating point value per dimension):
+ Provides precise vector representations.
+ The [Cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) method will be used for searching and matching.
+
+ * `Int8` (an 8-bit integer value per dimension):
+ Reduces storage requirements while maintaining good performance.
+ Saves up to 75% storage compared to 32-bit floating-point values.
+ The Cosine similarity method will be used for searching and matching.
+
+ * `Binary` (1-bit per dimension):
+ Minimizes storage usage, suitable for use cases where binary representation suffices.
+ Saves approximately 96% storage compared to 32-bit floating-point values.
+ The [Hamming distance](https://en.wikipedia.org/wiki/Hamming_distance) method will be used for searching and matching.
+
+
+ If your documents contain data that is already quantized,
+ it cannot be re-quantized to a lower precision format (e.g., Int8 cannot be converted to Binary).
+
+
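+To illustrate how `Binary` matching works (a sketch assuming simple sign-based binarization - not necessarily RavenDB's exact scheme), each dimension keeps only its sign, and vectors are compared by counting the bits in which they differ:
+
+```csharp
+// A sketch of sign-based binary quantization and Hamming-distance matching
+// (illustration only - not necessarily RavenDB's exact internal scheme):
+static bool[] ToBinary(float[] vector) =>
+    vector.Select(v => v > 0).ToArray();
+
+// Hamming distance = the number of positions in which the two vectors differ:
+static int HammingDistance(bool[] a, bool[] b) =>
+    a.Zip(b, (x, y) => x != y ? 1 : 0).Sum();
+
+var v1 = ToBinary(new float[] { 0.9f, -0.2f,  0.4f, -0.7f }); // T F T F
+var v2 = ToBinary(new float[] { 0.8f, -0.1f, -0.3f, -0.6f }); // T F F F
+
+// Distance 1: the vectors differ in a single dimension,
+// so they are considered close under Hamming matching.
+var distance = HammingDistance(v1, v2);
+```
+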
+#### Examples
+
+
+
+* In this example:
+ * The source data consists of text.
+ * The generated embeddings will use the _Int8_ format.
+
+
+
+```csharp
+var similarProducts = session.Query<Product>()
+ .VectorSearch(
+ field => field
+ // Specify the source text field for the embeddings
+ .WithText(x => x.Name)
+ // Set the quantization type for the generated embeddings
+ .TargetQuantization(VectorEmbeddingType.Int8),
+ searchTerm => searchTerm
+ // Provide the search term for comparison
+ .ByText("italian food"))
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Query<Product>()
+ .VectorSearch(
+ field => field
+ .WithText(x => x.Name)
+ .TargetQuantization(VectorEmbeddingType.Int8),
+ searchTerm => searchTerm
+ .ByText("italian food"))
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+    .DocumentQuery<Product>()
+ .VectorSearch(
+ field => field
+ .WithText(x => x.Name)
+ .TargetQuantization(VectorEmbeddingType.Int8),
+ searchTerm => searchTerm
+ .ByText("italian food"))
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+    .AsyncDocumentQuery<Product>()
+ .VectorSearch(
+ field => field
+ .WithText(x => x.Name)
+ .TargetQuantization(VectorEmbeddingType.Int8),
+ searchTerm => searchTerm
+ .ByText("italian food"))
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+ .RawQuery(@"
+ from 'Products'
+ // Wrap the 'Name' field with 'embedding.text_i8'
+ where vector.search(embedding.text_i8(Name), $searchTerm)")
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from 'Products'
+ // Wrap the 'Name' field with 'embedding.text_i8'
+ where vector.search(embedding.text_i8(Name), $searchTerm)")
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+from "Products"
+// Wrap the 'Name' field with 'embedding.text_i8'
+where vector.search(embedding.text_i8(Name), $searchTerm)
+{ "searchTerm" : "italian food" }
+```
+
+
+
+
+
+
+* In this example:
+ * The source data is an array of 32-bit floats.
+ * The generated embeddings will use the _Binary_ format.
+
+
+
+```csharp
+var similarMovies = session.Query<Movie>()
+ .VectorSearch(
+ field => field
+ // Specify the source field and its type
+ .WithEmbedding(x => x.TagsEmbeddedAsSingle, VectorEmbeddingType.Single)
+ // Set the quantization type for the generated embeddings
+ .TargetQuantization(VectorEmbeddingType.Binary),
+ queryVector => queryVector
+ // Provide the vector to use for comparison
+        .ByEmbedding(new RavenVector<float>(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ })))
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession.Query<Movie>()
+ .VectorSearch(
+ field => field
+ .WithEmbedding(x => x.TagsEmbeddedAsSingle, VectorEmbeddingType.Single)
+ .TargetQuantization(VectorEmbeddingType.Binary),
+ queryVector => queryVector
+        .ByEmbedding(new RavenVector<float>(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ })))
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToListAsync();
+```
+
+
+```csharp
+var similarMovies = session.Advanced
+    .DocumentQuery<Movie>()
+ .VectorSearch(
+ field => field
+ .WithEmbedding(x => x.TagsEmbeddedAsSingle, VectorEmbeddingType.Single)
+ .TargetQuantization(VectorEmbeddingType.Binary),
+ queryVector => queryVector
+        .ByEmbedding(new RavenVector<float>(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ })))
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Movie>()
+ .VectorSearch(
+ field => field
+ .WithEmbedding(x => x.TagsEmbeddedAsSingle, VectorEmbeddingType.Single)
+ .TargetQuantization(VectorEmbeddingType.Binary),
+ queryVector => queryVector
+        .ByEmbedding(new RavenVector<float>(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ })))
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarMovies = session.Advanced
+ .RawQuery(@"
+ from 'Movies'
+ // Wrap the 'TagsEmbeddedAsSingle' field with 'embedding.f32_i1'
+ where vector.search(embedding.f32_i1(TagsEmbeddedAsSingle), $queryVector)")
+ .AddParameter("queryVector", new RavenVector(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ }))
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from 'Movies'
+ // Wrap the 'TagsEmbeddedAsSingle' field with 'embedding.f32_i1'
+ where vector.search(embedding.f32_i1(TagsEmbeddedAsSingle), $queryVector)")
+ .AddParameter("queryVector", new RavenVector(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ }))
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+from "Movies"
+// Wrap the 'TagsEmbeddedAsSingle' field with 'embedding.f32_i1'
+where vector.search(embedding.f32_i1(TagsEmbeddedAsSingle), $queryVector)
+{ "queryVector" : { "@vector" : [6.599999904632568,7.699999809265137] }}
+```
+
+
+
+
+
+#### Field configuration methods in RQL:
+
+The following methods are available for performing a vector search via RQL:
+
+
+
+* `embedding.text`:
+ Generates embeddings from text as multi-dimensional vectors with 32-bit floating-point values,
+ without applying quantization.
+
+* `embedding.text_i8`:
+ Generates embeddings from text as multi-dimensional vectors with 8-bit integer values.
+
+* `embedding.text_i1`:
+  Generates embeddings from text as multi-dimensional vectors in a binary format.
+
+* `embedding.f32_i8`:
+  Converts multi-dimensional vectors with 32-bit floating-point values into vectors with 8-bit integer values.
+
+* `embedding.f32_i1`:
+  Converts multi-dimensional vectors with 32-bit floating-point values into vectors in a binary format.
+
+* `embedding.i8`:
+  Indicates that the source data is already quantized as Int8 (cannot be further quantized).
+
+* `embedding.i1`:
+ Indicates that the source data is already quantized as binary (cannot be further quantized).
+
+
+
+Wrap the field name using any of the relevant methods listed above, based on your requirements.
+For example, the following RQL encodes **text to Int8**:
+
+
+
+```sql
+from "Products"
+// Wrap the document field with 'embedding.text_i8'
+where vector.search(embedding.text_i8(Name), "italian food", 0.82, 20)
+```
+
+
+
+When the field name is Not wrapped in any method,
+the underlying values are treated as numerical values in the form of **32-bit floating-point** (Single) precision.
+For example, the following RQL will use the floating-point values as they are, without applying further quantization:
+
+
+
+```sql
+from "Movies"
+// No wrapping
+where vector.search(TagsEmbeddedAsSingle, $queryVector, 0.85, 10)
+{"queryVector" : { "@vector" : [6.599999904632568, 7.699999809265137] }}
+```
+
+
+
+## Querying vector fields and regular data in the same query
+
+* You can perform a vector search and a regular search in the same query.
+ A single auto-index will be created for both search predicates.
+
+* In the following example, results will include Product documents with content similar to "Italian food" in their _Name_ field and a _PricePerUnit_ above 35.
+ The following auto-index will be generated:
+ `Auto/Products/ByPricePerUnitAndVector.search(embedding.text(Name))`.
+
+
+
+```csharp
+var similarProducts = session.Query<Product>()
+ // Perform a filtering condition:
+ .Where(x => x.PricePerUnit > 35)
+ // Perform a vector search:
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ 0.75f, 16)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Query<Product>()
+ .Where(x => x.PricePerUnit > 35)
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ 0.75f, 16)
+ .Customize(x => x.WaitForNonStaleResults())
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+ .DocumentQuery()
+    .DocumentQuery<Product>()
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ 0.75f, 16)
+ .WhereGreaterThan(x => x.PricePerUnit, 35)
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+    .AsyncDocumentQuery<Product>()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("italian food"),
+ 0.75f, 16)
+ .WhereGreaterThan(x => x.PricePerUnit, 35)
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarProducts = session.Advanced
+ .RawQuery(@"
+ from 'Products'
+ where (PricePerUnit > $minPrice) and (vector.search(embedding.text(Name), $searchTerm, 0.75, 16))")
+ .AddParameter("minPrice", 35.0)
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var similarProducts = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from 'Products'
+ where (PricePerUnit > $minPrice) and (vector.search(embedding.text(Name), $searchTerm, 0.75, 16))")
+ .AddParameter("minPrice", 35.0)
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+from "Products"
+// The filtering condition:
+where (PricePerUnit > $minPrice)
+and (vector.search(embedding.text(Name), $searchTerm, 0.75, 16))
+{ "minPrice" : 35.0, "searchTerm" : "italian food" }
+```
+
+
+
+
+
+**Impact of _NumberOfCandidates_ on query results**:
+
+* When combining a vector search with a filtering condition, the filter applies only to the documents retrieved within the `NumberOfCandidates` param limit.
+ Increasing or decreasing _NumberOfCandidates_ can affect the query results.
+ A larger _NumberOfCandidates_ increases the pool of documents considered,
+ improving the chances of finding results that match both the vector search and the filter condition.
+
+* For example, in the above query, the vector search executes with similarity `0.75f` and _NumberOfCandidates_ `16`.
+ Running this query on RavenDB's sample data returns **2** documents.
+
+* However, if you increase _NumberOfCandidates_, the query will retrieve more candidate documents before applying the filtering condition.
+ If you run the following query:
+
+
+
+```sql
+from "Products"
+where (PricePerUnit > $minPrice)
+// Run vector search with similarity 0.75 and NumberOfCandidates 25
+and (vector.search(embedding.text(Name), $searchTerm, 0.75, 25))
+{ "minPrice" : 35.0, "searchTerm" : "italian food" }
+```
+
+
+
+  the query now returns **4** documents instead of **2**.
+
+
+
+## Combining multiple vector searches in the same query
+
+* You can combine multiple vector search statements in the same query using logical operators.
+ This is useful when you want to retrieve documents that match more than one vector-based criterion.
+
+* This can be done using [DocumentQuery](../../../client-api/session/querying/how-to-query.mdx#sessionadvanceddocumentquery),
+ [RawQuery](../../../client-api/session/querying/how-to-query.mdx#sessionadvancedrawquery) or raw [RQL](../../../client-api/session/querying/what-is-rql.mdx).
+
+* In the example below, the results will include companies that match one of two vector search conditions:
+ * Companies from European countries with a _Name_ similar to "snack"
+ * Or companies with a _Name_ similar to "dairy"
+
+* Running the query example on the RavenDB sample data will generate the following auto-index:
+ `Auto/Companies/ByVector.search(embedding.text(Address.Country))AndVector.search(embedding.text(Name))`.
+ This index includes two vector fields: _Address.Country_ and _Name_.
+
+
+
+```csharp
+var companies = session.Advanced
+    .DocumentQuery<Company>()
+ // Use OpenSubclause & CloseSubclause to differentiate between clauses:
+ // ====================================================================
+
+ .OpenSubclause()
+ .VectorSearch( // Search for companies that sell snacks or similar
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("snack"),
+ minimumSimilarity: 0.78f
+ )
+ // Use 'AndAlso' for an AND operation
+ .AndAlso()
+ .VectorSearch( // Search for companies located in Europe
+ field => field.WithText(x => x.Address.Country),
+ searchTerm => searchTerm.ByText("europe"),
+ minimumSimilarity: 0.82f
+ )
+ .CloseSubclause()
+ // Use 'OrElse' for an OR operation
+ .OrElse()
+ .OpenSubclause()
+ .VectorSearch( // Search for companies that sell dairy products or similar
+ field => field.WithText(x => x.Name),
+        searchTerm => searchTerm.ByText("dairy"),
+ minimumSimilarity: 0.80f
+ )
+ .CloseSubclause()
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var companies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Company>()
+ .OpenSubclause()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("snack"),
+ minimumSimilarity: 0.78f
+ )
+ .AndAlso()
+ .VectorSearch(
+ field => field.WithText(x => x.Address.Country),
+ searchTerm => searchTerm.ByText("europe"),
+ minimumSimilarity: 0.82f
+ )
+ .CloseSubclause()
+ .OrElse()
+ .OpenSubclause()
+ .VectorSearch(
+ field => field.WithText(x => x.Name),
+ searchTerm => searchTerm.ByText("dairy"),
+ minimumSimilarity: 0.80f
+ )
+ .CloseSubclause()
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```csharp
+var companies = session.Advanced
+ .RawQuery(@"
+ from Companies
+ where
+ (
+ vector.search(embedding.text(Name), $searchTerm1, 0.78)
+ and
+ vector.search(embedding.text(Address.Country), $searchTerm2, 0.82)
+ )
+ or
+ (
+ vector.search(embedding.text(Name), $searchTerm3, 0.80)
+ )
+ ")
+ .AddParameter("searchTerm1", "snack")
+ .AddParameter("searchTerm2","europe")
+ .AddParameter("searchTerm3", "dairy")
+ .WaitForNonStaleResults()
+ .ToList();
+```
+
+
+```csharp
+var companies = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from Companies
+ where
+ (
+ vector.search(embedding.text(Name), $searchTerm1, 0.78)
+ and
+ vector.search(embedding.text(Address.Country), $searchTerm2, 0.82)
+ )
+ or
+ (
+ vector.search(embedding.text(Name), $searchTerm3, 0.80)
+ )
+ ")
+ .AddParameter("searchTerm1", "snack")
+ .AddParameter("searchTerm2","europe")
+ .AddParameter("searchTerm3", "dairy")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+```
+
+
+```sql
+from "Companies"
+where
+(
+ vector.search(embedding.text(Name), $searchTerm1, 0.78)
+ and
+ vector.search(embedding.text(Address.Country), $searchTerm2, 0.82)
+)
+or
+(
+ vector.search(embedding.text(Name), $searchTerm3, 0.80)
+)
+{"searchTerm1" : "snack", "searchTerm2" : "europe", "searchTerm3" : "dairy"}
+```
+
+
+
+
+
+**How multiple vector search clauses are evaluated**:
+
+* Each vector search clause is evaluated independently - the search algorithm runs separately for each vector field.
+
+* Each clause retrieves a limited number of candidates, determined by the _NumberOfCandidates_ parameter.
+  * You can explicitly set this value in the query clause; see [query parameters](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#the-dynamic-query-parameters).
+ * If not specified, it is taken from the [Indexing.Corax.VectorSearch.DefaultNumberOfCandidatesForQuerying](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofcandidatesforquerying) configuration key (default is 16).
+
+* **The final result set** is computed by applying the logical operators (and, or) between these independently retrieved sets.
+
+* To improve the chances of getting intersecting results, consider increasing the _NumberOfCandidates_ in each vector search clause, as shown in the sketch below.
+  This expands the pool of documents considered by each clause, raising the likelihood of finding matches that satisfy the combined logic.
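+
+For example, a minimal RQL sketch (a variant of the query above) that explicitly raises the number of candidates in each clause from the default 16 to 32:
+
+```sql
+from "Companies"
+where
+(
+    vector.search(embedding.text(Name), $searchTerm1, 0.78, 32)
+    and
+    vector.search(embedding.text(Address.Country), $searchTerm2, 0.82, 32)
+)
+or
+(
+    vector.search(embedding.text(Name), $searchTerm3, 0.80, 32)
+)
+{"searchTerm1" : "snack", "searchTerm2" : "europe", "searchTerm3" : "dairy"}
+```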
+
+
+
+## Syntax
+
+`VectorSearch`:
+
+
+```csharp
+public IRavenQueryable<T> VectorSearch<T>(
+    Func<IVectorFieldFactory<T>, IVectorEmbeddingTextField> textFieldFactory,
+    Action<IVectorEmbeddingTextFieldValueFactory> textValueFactory,
+    float? minimumSimilarity = null,
+    int? numberOfCandidates = null,
+    bool isExact = false);
+
+public IRavenQueryable<T> VectorSearch<T>(
+    Func<IVectorFieldFactory<T>, IVectorEmbeddingField> embeddingFieldFactory,
+    Action<IVectorEmbeddingFieldValueFactory> embeddingValueFactory,
+    float? minimumSimilarity = null,
+    int? numberOfCandidates = null,
+    bool isExact = false);
+
+public IRavenQueryable<T> VectorSearch<T>(
+    Func<IVectorFieldFactory<T>, IVectorField> embeddingFieldFactory,
+    Action<IVectorFieldValueFactory> embeddingValueFactory,
+    float? minimumSimilarity = null,
+    int? numberOfCandidates = null,
+    bool isExact = false);
+```
+
+
+| Parameter | Type | Description |
+|---------------------------|-----------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
+| **textFieldFactory** | `Func<IVectorFieldFactory<T>, IVectorEmbeddingTextField>` | Factory creating a textual vector field for indexing purposes. |
+| **textValueFactory** | `Action<IVectorEmbeddingTextFieldValueFactory>` | Factory preparing the queried data to be used in the vector search. |
+| **embeddingFieldFactory** | `Func<IVectorFieldFactory<T>, IVectorEmbeddingField>` | Factory creating an embedding vector field for indexing purposes. |
+| **embeddingValueFactory** | `Action<IVectorEmbeddingFieldValueFactory>` | Factory preparing the queried data to be used in the vector search. |
+| **embeddingFieldFactory** | `Func<IVectorFieldFactory<T>, IVectorField>` | Factory using an existing, already indexed vector field. |
+| **embeddingValueFactory** | `Action<IVectorFieldValueFactory>` | Factory preparing the queried data to be used in the vector search. |
+| **minimumSimilarity** | `float?` | Minimum similarity between the queried value and the indexed value for the vector search to match. |
+| **numberOfCandidates** | `int?` | Number of candidate nodes for the HNSW algorithm. Higher values improve accuracy but require more computation. |
+| **isExact** | `bool` | `false` - vector search will be performed in an approximate manner. `true` - vector search will be performed in an exact manner. |
+
+The default value for `minimumSimilarity` is defined by this configuration key:
+[Indexing.Corax.VectorSearch.DefaultMinimumSimilarity](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultminimumsimilarity).
+
+The default value for `numberOfCandidates` is defined by this configuration key:
+[Indexing.Corax.VectorSearch.DefaultNumberOfCandidatesForQuerying](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofcandidatesforquerying).
+
+`IVectorFieldFactory`:
+
+
+```csharp
+public interface IVectorFieldFactory<T>
+{
+    // Methods for the dynamic query:
+    // ==============================
+
+    public IVectorEmbeddingTextField WithText(string documentFieldName);
+    public IVectorEmbeddingTextField WithText(Expression<Func<T, object>> propertySelector);
+
+    public IVectorEmbeddingField WithEmbedding(string documentFieldName,
+        VectorEmbeddingType storedEmbeddingQuantization = VectorEmbeddingType.Single);
+    public IVectorEmbeddingField WithEmbedding(Expression<Func<T, object>> propertySelector,
+        VectorEmbeddingType storedEmbeddingQuantization = VectorEmbeddingType.Single);
+
+    public IVectorEmbeddingField WithBase64(string documentFieldName,
+        VectorEmbeddingType storedEmbeddingQuantization = VectorEmbeddingType.Single);
+    public IVectorEmbeddingField WithBase64(Expression<Func<T, object>> propertySelector,
+        VectorEmbeddingType storedEmbeddingQuantization = VectorEmbeddingType.Single);
+
+    // Methods for querying a static index:
+    // ====================================
+
+    public IVectorField WithField(string indexFieldName);
+    public IVectorField WithField(Expression<Func<T, object>> indexPropertySelector);
+}
+```
+
+
+| Parameter | Type | Description |
+|---------------------------------|-------------------------------|----------------------------------------------------------------------------------------|
+| **documentFieldName** | `string` | The name of the document field containing text / embedding / base64 encoded data. |
+| **indexFieldName** | `string` | The name of the index-field that vector search will be performed on. |
+| **propertySelector** | `Expression<Func<T, object>>` | Path to the document field containing text / embedding / base64 encoded data. |
+| **indexPropertySelector** | `Expression<Func<T, object>>` | Path to the index-field containing indexed data. |
+| **storedEmbeddingQuantization** | `VectorEmbeddingType` | Quantization format of the stored embeddings. Default: `VectorEmbeddingType.Single` |
+
+`IVectorEmbeddingTextField` & `IVectorEmbeddingField`:
+
+
+```csharp
+public interface IVectorEmbeddingTextField
+{
+ public IVectorEmbeddingTextField TargetQuantization(
+ VectorEmbeddingType targetEmbeddingQuantization);
+
+ public IVectorEmbeddingTextField UsingTask(
+ string embeddingsGenerationTaskIdentifier);
+}
+
+public interface IVectorEmbeddingField
+{
+ public IVectorEmbeddingField TargetQuantization(
+ VectorEmbeddingType targetEmbeddingQuantization);
+}
+```
+
+
+| Parameter | Type | Description |
+|----------------------------------------|-----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **targetEmbeddingQuantization** | `VectorEmbeddingType` | The desired target quantization format. |
+| **embeddingsGenerationTaskIdentifier** | `string` | The identifier of an embeddings generation task. Used to locate the embeddings generated by the task in the [Embedding collections](../../../ai-integration/generating-embeddings/embedding-collections.mdx). |
+
+
+```csharp
+public enum VectorEmbeddingType
+{
+ Single,
+ Int8,
+ Binary,
+ Text
+}
+```
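+
+For illustration, a minimal sketch of a dynamic query that applies `TargetQuantization` to a textual vector field,
+which should correspond to wrapping the field with `embedding.text_i8` in RQL:
+
+```csharp
+var similarProducts = session.Query<Product>()
+    .VectorSearch(
+        field => field
+            .WithText(x => x.Name)
+            // Quantize the embeddings generated from the text to Int8
+            .TargetQuantization(VectorEmbeddingType.Int8),
+        searchTerm => searchTerm.ByText("italian food"))
+    .ToList();
+```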
+
+
+`IVectorEmbeddingTextFieldValueFactory` & `IVectorEmbeddingFieldValueFactory`:
+
+
+```csharp
+public interface IVectorEmbeddingTextFieldValueFactory
+{
+    // Defines the queried text(s)
+    public void ByText(string text);
+    public void ByTexts(IEnumerable<string> texts);
+
+    // Defines the queried text(s) and the embedding generation task to use.
+    // These overloads should be used only when querying a static-index where vector fields contain
+    // numerical embeddings that were not generated by RavenDB's built-in embedding tasks.
+    // The text is embedded at query time using the specified task ID and compared to the indexed vectors.
+    public void ByText(string text, string embeddingsGenerationTaskIdentifier);
+    public void ByTexts(IEnumerable<string> texts, string embeddingsGenerationTaskIdentifier);
+
+    // Query by the embedding(s) indexed from the specified document for the queried field
+    public void ForDocument(string documentId);
+}
+
+public interface IVectorEmbeddingFieldValueFactory
+{
+    // Define the queried embedding(s):
+    // ================================
+
+    // 'embedding' / 'embeddings' is an enumerable containing embedding values
+    public void ByEmbedding<T>(IEnumerable<T> embedding) where T : unmanaged, INumber<T>;
+    public void ByEmbeddings<T>(IEnumerable<IEnumerable<T>> embeddings) where T : unmanaged, INumber<T>;
+
+    // 'embedding' / 'embeddings' is an array containing embedding values
+    public void ByEmbedding<T>(T[] embedding) where T : unmanaged, INumber<T>;
+    public void ByEmbeddings<T>(T[][] embeddings) where T : unmanaged, INumber<T>;
+
+    // 'embedding' is a 'RavenVector' containing embedding values
+    public void ByEmbedding<T>(RavenVector<T> embedding) where T : unmanaged, INumber<T>;
+
+    // 'base64Embedding' / 'base64Embeddings' is encoded as base64 string(s)
+    public void ByBase64(string base64Embedding);
+    public void ByBase64(IEnumerable<string> base64Embeddings);
+}
+```
+
+
+#### `RavenVector`:
+
+RavenVector is RavenDB's dedicated data type for storing and querying numerical embeddings.
+Learn more in [RavenVector](../../../ai-integration/vector-search/data-types-for-vector-search.mdx#ravenvector).
+
+
+```csharp
+public class RavenVector<T>
+{
+ public T[] Embedding { get; set; }
+}
+```
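+
+For illustration, a minimal sketch (assuming an initialized document store and the `Movie` sample class used in this article's examples) of storing an embedding with `RavenVector<float>`:
+
+```csharp
+public class Movie
+{
+    public string Title { get; set; }
+    // The raw embedding values, stored in RavenDB's dedicated vector type
+    public RavenVector<float> TagsEmbeddedAsSingle { get; set; }
+}
+
+using (var session = store.OpenSession())
+{
+    session.Store(new Movie
+    {
+        Title = "Sample movie",
+        TagsEmbeddedAsSingle = new RavenVector<float>(new float[] { 6.6f, 7.7f })
+    });
+    session.SaveChanges();
+}
+```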
+
+
+#### `VectorQuantizer`:
+
+RavenDB provides the following quantizer methods.
+Use them to transform your raw data into the desired format;
+quantizers from other sources may not produce compatible results.
+
+
+```csharp
+public static class VectorQuantizer
+{
+ public static sbyte[] ToInt8(float[] rawEmbedding);
+    public static byte[] ToInt1(ReadOnlySpan<float> rawEmbedding);
+}
+```
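+
+For example, a minimal sketch of quantizing a raw 32-bit floating-point embedding before storing or querying it:
+
+```csharp
+float[] rawEmbedding = new float[] { 0.1f, 0.2f, 0.3f, 0.4f };
+
+// Quantize to 8-bit integer (Int8) format
+sbyte[] asInt8 = VectorQuantizer.ToInt8(rawEmbedding);
+
+// Quantize to binary (1-bit) format
+byte[] asInt1 = VectorQuantizer.ToInt1(rawEmbedding);
+```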
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/content/_vector-search-using-static-index-csharp.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/content/_vector-search-using-static-index-csharp.mdx
new file mode 100644
index 0000000000..b4edc0ca42
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/content/_vector-search-using-static-index-csharp.mdx
@@ -0,0 +1,1822 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article explains how to perform a **vector search** using a **static index**.
+  **Before reading this article**, it is recommended to become familiar with the [Vector search using a dynamic query](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx) article.
+
+* A static index allows you to define a **vector index-field**, enabling you to execute vector searches
+ while leveraging the advantages of RavenDB's [indexes](../../../indexes/what-are-indexes.mdx).
+
+* The vector search feature is only supported by indexes that use the [Corax search engine](../../../indexes/search-engine/corax.mdx).
+
+* In this article:
+ * [Indexing a vector field - Overview](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-a-vector-field---overview)
+ * [Defining a vector field in a static index](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#defining-a-vector-field-in-a-static-index)
+ * [Parameters defined at index definition](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#parameters-defined-at-index-definition)
+ * [Behavior during indexing](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#behavior-during-indexing)
+ * [Parameters used at query time](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#parameters-used-at-query-time)
+ * [Behavior when documents are deleted](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#vector-behavior-when-documents-are-deleted)
+ * [Indexing vector data - TEXT](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-vector-data---text)
+ * [Indexing raw text](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-raw-text)
+ * [Indexing pre-made text-embeddings](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-pre-made-text-embeddings)
+ * [Indexing vector data - NUMERICAL](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-vector-data---numerical)
+ * [Indexing numerical data and querying using numeric input](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-numerical-data-and-querying-using-numeric-input)
+ * [Indexing numerical data and querying using text input](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-numerical-data-and-querying-using-text-input)
+ * [Indexing multiple field types](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-multiple-field-types)
+ * [Querying the static index for similar documents](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#querying-the-static-index-for-similar-documents)
+ * [Configure the vector field in the Studio](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#configure-the-vector-field-in-the-studio)
+
+
+
+## Indexing a vector field - Overview
+
+
+
+#### Defining a vector field in a static index
+
+To define a vector index-field in your static-index definition:
+
+* **From the Client API**:
+
+ **`LoadVector()`**:
+ When indexing **pre-made text-embeddings** generated by RavenDB's [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx),
+ use the `LoadVector()` method in your index definition.
+ An example is available in [Indexing pre-made text-embeddings](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-pre-made-text-embeddings).
+
+ **`CreateVector()`**:
+ When indexing **your own data** (textual or numerical) that was not generated by these tasks,
+ use the `CreateVector()` method in your index definition.
+ An example is available in [Indexing raw text](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-raw-text).
+
+* **From the Studio**:
+ See [Define a vector field in the Studio](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#define-a-vector-field-in-the-studio).
+
+The **source data types** that can be used for vector search are detailed in [Data types for vector search](../../../ai-integration/vector-search/data-types-for-vector-search.mdx).
+
+
+
+
+
+#### Parameters defined at index definition
+
+The following params can be defined for the vector index-field in the index definition:
+
+**Source embedding type** -
+RavenDB supports performing vector search on TEXTUAL values or NUMERICAL arrays.
+This param specifies the embedding format of the source data to be indexed.
+Options include `Text`, `Single`, `Int8`, or `Binary`.
+
+**Destination embedding type** -
+Specify the quantization format for the embeddings that will be generated.
+Read more about quantization in [Quantization options](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options).
+
+**Dimensions** -
+For numerical input only - define the size of the array from your source document.
+
+* If this param is Not provided -
+ the size will be determined by the first document indexed and will apply to all subsequent documents.
+
+* Ensure the dimensionality of these numerical arrays (i.e., their length) is consistent across all source documents for the indexed field.
+ An index error will occur if a source document has a different dimension for the indexed field.
+
+**Number of edges** -
+Specify the number of edges that will be created for a vector during indexing.
+If not specified, the default value is taken from the following configuration key: [Indexing.Corax.VectorSearch.DefaultNumberOfEdges](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofedges).
+
+**Number of candidates for indexing time** -
+The number of candidates (potential neighboring vectors) that RavenDB evaluates during vector indexing.
+If not specified, the default value is taken from the following configuration key: [Indexing.Corax.VectorSearch.DefaultNumberOfCandidatesForIndexing](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofcandidatesforindexing).
+(Note that this param differs from the number of candidates used at query time.)
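+
+For example, a minimal sketch of a `VectorOptions` object that sets these parameters (complete index definitions using these options appear later in this article):
+
+```csharp
+var vectorOptions = new VectorOptions()
+{
+    // Embedding format of the source data (Text, Single, Int8, or Binary)
+    SourceEmbeddingType = VectorEmbeddingType.Single,
+    // Quantization format for the embeddings that will be generated
+    DestinationEmbeddingType = VectorEmbeddingType.Int8,
+    // For numerical input only - the size of the source arrays
+    Dimensions = 2,
+    // Number of edges created per vector during indexing
+    NumberOfEdges = 20,
+    // Number of candidates evaluated during indexing
+    NumberOfCandidatesForIndexing = 20
+};
+```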
+
+
+
+
+
+#### Behavior during indexing
+
+* **Raw textual input**:
+ When indexing raw textual input from your documents, RavenDB generates embedding vectors using the built-in
+ [bge-micro-v2](https://huggingface.co/TaylorAI/bge-micro-v2) sentence-transformer model, which are then indexed.
+
+* **Pre-made text-embeddings input**:
+ When indexing embeddings that are pre-generated from your documents' raw text by RavenDB's
+ [Embeddings generation tasks](../../../ai-integration/generating-embeddings/overview.mdx),
+ RavenDB indexes them without additional transformation, unless quantization is applied.
+
+* **Raw numerical input**:
+ When indexing pre-made numerical arrays that are already in vector format but were Not generated by these tasks,
+ such as numerical arrays you created externally, RavenDB indexes them without additional transformation,
+ unless quantization is applied.
+
+The embeddings are indexed on the server using the [HNSW algorithm](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world).
+This algorithm organizes embeddings into a high-dimensional graph structure,
+enabling efficient retrieval of Approximate Nearest Neighbors (ANN) during queries.
+
+
+
+
+
+#### Parameters used at query time
+
+**Minimum similarity** -
+You can specify the minimum similarity to use when searching for related vectors; this can be any value between `0.0f` and `1.0f`.
+A value closer to `1.0f` requires higher similarity between vectors, while a value closer to `0.0f` allows for less similarity.
+If not specified, the default value is taken from the following configuration key: [Indexing.Corax.VectorSearch.DefaultMinimumSimilarity](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultminimumsimilarity).
+
+**Number of candidates at query time** -
+You can specify the maximum number of vectors that RavenDB will return from a graph search.
+The number of resulting documents corresponding to these vectors may be:
+
+ * lower than the number of candidates - when multiple vectors originated from the same document.
+
+ * higher than the number of candidates - when the same vector is shared between multiple documents.
+
+If not specified, the default value is taken from the following configuration key: [Indexing.Corax.VectorSearch.DefaultNumberOfCandidatesForQuerying](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofcandidatesforquerying).
+
+**Search method** -
+You can specify the search method at query time:
+
+ * _Approximate Nearest-Neighbor search_ (Default):
+ Search for related vectors in an approximate manner, providing faster results.
+
+ * _Exact search_:
+ Perform a thorough scan of the vectors to find the actual closest vectors,
+ offering better accuracy but at a higher computational cost.
+
+**To ensure consistent comparisons** -
+at query time, the search term is transformed into an embedding vector using the same method that was applied to the indexed vector field.
+
+**Search results** -
+The server will search for the most similar vectors in the indexed vector space, taking into account all the parameters described.
+The documents that correspond to the resulting vectors are then returned to the client.
+
+By default, the resulting documents will be ordered by their score.
+You can modify this behavior using the [Indexing.Corax.VectorSearch.OrderByScoreAutomatically](../../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchorderbyscoreautomatically) configuration key.
+In addition, you can apply any of the 'order by' methods to your query, as explained in [sort query results](../../../client-api/session/querying/sort-query-results.mdx).
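+
+For example, a minimal RQL sketch that sets these parameters explicitly, using the 'Products/ByVector/Text' index defined later in this article:
+
+```sql
+from index "Products/ByVector/Text"
+// exact() switches from the default approximate search to an exact search;
+// 0.80 is the minimum similarity, 32 is the number of candidates at query time
+where exact(vector.search(VectorFromText, $searchTerm, 0.80, 32))
+order by score()
+{ "searchTerm" : "italian food" }
+```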
+
+
+
+
+
+#### Vector behavior when documents are deleted
+
+* RavenDB's implementation of the HNSW graph is append-only.
+
+* When all documents associated with a specific vector are deleted, the vector itself is Not physically removed but is soft-deleted.
+ This means the vector is marked as deleted and will no longer appear in query results.
+ Currently, compaction is not supported.
+
+
+
+---
+
+## Indexing vector data - TEXT
+
+### Indexing raw text
+
+The index in this example indexes data from raw text.
+For an index that indexes pre-made text-embeddings see [this example below](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-pre-made-text-embeddings).
+
+The following index defines a **vector field** named `VectorfromText`.
+It indexes embeddings generated from the raw textual data in the `Name` field of all _Product_ documents.
+
+
+
+
+{`public class Products_ByVector_Text :
+    AbstractIndexCreationTask<Product, Products_ByVector_Text.IndexEntry>
+{
+ public class IndexEntry()
+ {
+ // This index-field will hold the embeddings that will be generated
+ // from the TEXT in the documents
+ public object VectorFromText { get; set; }
+ }
+
+ public Products_ByVector_Text()
+ {
+ Map = products => from product in products
+ select new IndexEntry
+ {
+ // Call 'CreateVector' to create a VECTOR FIELD.
+ // Pass the document field containing the text
+ // from which the embeddings will be generated.
+ VectorFromText = CreateVector(product.Name)
+ };
+
+ // You can customize the vector field using EITHER of the following syntaxes:
+ // ==========================================================================
+
+ // Customize using VectorOptions:
+ VectorIndexes.Add(x => x.VectorFromText,
+ new VectorOptions()
+ {
+ // Define the source embedding type
+ SourceEmbeddingType = VectorEmbeddingType.Text,
+
+ // Define the quantization for the destination embedding
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+
+ // Optionally, set the number of edges
+ NumberOfEdges = 20,
+
+ // Optionally, set the number of candidates
+ NumberOfCandidatesForIndexing = 20
+ });
+
+ // OR - Customize using builder:
+        Vector(x => x.VectorFromText,
+ builder => builder
+ .SourceEmbedding(VectorEmbeddingType.Text)
+ .DestinationEmbedding(VectorEmbeddingType.Single)
+ .NumberOfEdges(20)
+ .NumberOfCandidates(20));
+
+ // The index MUST use the Corax search engine
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`public class Products_ByVector_Text_JS : AbstractJavaScriptIndexCreationTask
+{
+ public Products_ByVector_Text_JS()
+ {
+        Maps = new HashSet<string>()
+ {
+ @"map('Products', function (product) {
+ return {
+ VectorFromText: createVector(product.Name)
+ };
+ })"
+ };
+
+ Fields = new();
+ Fields.Add("VectorFromText", new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Text,
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+ NumberOfEdges = 20,
+ NumberOfCandidatesForIndexing = 20
+ }
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`var indexDefinition = new IndexDefinition
+{
+ Name = "Products/ByVector/Text",
+
+    Maps = new HashSet<string>
+ {
+ @"
+ from product in docs.Products
+ select new
+ {
+ VectorFromText = CreateVector(product.Name)
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromText",
+ new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Text,
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+ NumberOfEdges = 20,
+ NumberOfCandidatesForIndexing = 20
+ }
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+
+Execute a vector search using the index:
+Results will include _Product_ documents where the `Name` field is similar to the search term `"italian food"`.
+
+
+
+
+{`var similarProducts = session
+    .Query<Products_ByVector_Text.IndexEntry, Products_ByVector_Text>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ field => field
+ // Call 'WithField'
+ // Specify the index-field in which to search for similar values
+ .WithField(x => x.VectorFromText),
+ searchTerm => searchTerm
+ // Call 'ByText'
+ // Provide the term for the similarity comparison
+ .ByText("italian food"),
+ // Optionally, specify the minimum similarity value
+ minimumSimilarity: 0.82f,
+ // Optionally, specify the number candidates for querying
+ numberOfCandidates: 20,
+ // Optionally, specify whether the vector search should use the 'exact search method'
+ isExact: true)
+ // Waiting for not-stale results is not mandatory
+ // but will assure results are not stale
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Product>()
+ .ToList();
+`}
+
+
+
+
+{`var similarProducts = await asyncSession
+    .Query<Products_ByVector_Text.IndexEntry, Products_ByVector_Text>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromText),
+ searchTerm => searchTerm
+ .ByText("italian food"), 0.82f, 20, isExact: true)
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Product>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarProducts = session.Advanced
+    .DocumentQuery<Products_ByVector_Text.IndexEntry, Products_ByVector_Text>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromText),
+ searchTerm => searchTerm
+ .ByText("italian food"), 0.82f, 20, isExact: true)
+ .WaitForNonStaleResults()
+    .OfType<Product>()
+ .ToList();
+`}
+
+
+
+
+{`var similarProducts = await asyncSession.Advanced
+    .AsyncDocumentQuery<Products_ByVector_Text.IndexEntry, Products_ByVector_Text>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromText),
+ searchTerm => searchTerm
+ .ByText("italian food"),
+ 0.82f, 20, isExact: true)
+ .WaitForNonStaleResults()
+    .OfType<Product>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarProducts = session.Advanced
+ .RawQuery(@"
+ from index 'Products/ByVector/Text'
+ // Optionally, wrap the 'vector.search' query with 'exact()' to perform an exact search
+ where exact(vector.search(VectorFromText, $searchTerm, 0.82, 20))")
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var similarProducts = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Products/ByVector/Text'
+ // Optionally, wrap the 'vector.search' query with 'exact()' to perform an exact search
+ where exact(vector.search(VectorFromText, $searchTerm, 0.82, 20))")
+ .AddParameter("searchTerm", "italian food")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Products/ByVector/Text"
+// Optionally, wrap the 'vector.search' query with 'exact()' to perform an exact search
+where exact(vector.search(VectorFromText, $searchTerm, 0.82, 20))
+{ "searchTerm" : "italian food" }
+`}
+
+
+
+
+### Indexing pre-made text-embeddings
+
+The index in this example defines a **vector field** named `VectorFromTextEmbeddings`.
+It indexes pre-made text-embeddings that were generated by this
+[embedding generation task](../../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configuring-an-embeddings-generation-task---from-the-studio).
+
+
+
+
+{`public class Categories_ByPreMadeTextEmbeddings :
+    AbstractIndexCreationTask<Category, Categories_ByPreMadeTextEmbeddings.IndexEntry>
+{
+ public class IndexEntry()
+ {
+ // This index-field will hold the text embeddings
+ // that were pre-made by the Embeddings Generation Task
+ public object VectorFromTextEmbeddings { get; set; }
+ }
+
+ public Categories_ByPreMadeTextEmbeddings()
+ {
+ Map = categories => from category in categories
+ select new IndexEntry
+ {
+ // Call 'LoadVector' to create a VECTOR FIELD. Pass:
+ // * The document field name to be indexed (as a string)
+ // * The identifier of the task that generated the embeddings
+ // for the 'Name' field
+ VectorFromTextEmbeddings = LoadVector("Name", "id-for-task-open-ai")
+ };
+
+ VectorIndexes.Add(x => x.VectorFromTextEmbeddings,
+ new VectorOptions()
+ {
+ // Vector options can be customized
+ // in the same way as the above index example.
+ });
+
+ // The index MUST use the Corax search engine
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`public class Categories_ByPreMadeTextEmbeddings_JS : AbstractJavaScriptIndexCreationTask
+{
+ public Categories_ByPreMadeTextEmbeddings_JS()
+ {
+        Maps = new HashSet<string>()
+ {
+ @"map('Categories', function (category) {
+ return {
+ VectorFromTextEmbeddings:
+ loadVector('Name', 'id-for-task-open-ai')
+ };
+ })"
+ };
+
+ Fields = new();
+ Fields.Add("VectorFromTextEmbeddings", new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ // Vector options can be customized
+ // in the same way as the above index example.
+ }
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`var indexDefinition = new IndexDefinition
+{
+ Name = "Categories/ByPreMadeTextEmbeddings",
+    Maps = new HashSet<string>
+ {
+ @"
+ from category in docs.Categories
+ select new
+ {
+ VectorFromTextEmbeddings = LoadVector(""Name"", ""id-for-task-open-ai"")
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromTextEmbeddings",
+ new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ // Vector options can be customized
+ // in the same way as the above index example.
+ }
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+
+Execute a vector search using the index:
+Results will include _Category_ documents where the `Name` field is similar to the search term `"candy"`.
+
+
+
+
+{`var similarCategories = session
+    .Query<Categories_ByPreMadeTextEmbeddings.IndexEntry, Categories_ByPreMadeTextEmbeddings>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ field => field
+ // Call 'WithField'
+ // Specify the index-field in which to search for similar values
+ .WithField(x => x.VectorFromTextEmbeddings),
+ searchTerm => searchTerm
+ // Call 'ByText'
+ // Provide the search term for the similarity comparison
+ .ByText("candy"),
+ // Optionally, specify the minimum similarity value
+ minimumSimilarity: 0.75f,
+ // Optionally, specify the number of candidates for querying
+ numberOfCandidates: 20,
+ // Optionally, specify whether the vector search should use the 'exact search method'
+ isExact: true)
+ // Waiting for not-stale results is not mandatory
+ // but will assure results are not stale
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Category>()
+ .ToList();
+`}
+
+
+
+
+{`var similarCategories = await asyncSession
+    .Query<Categories_ByPreMadeTextEmbeddings.IndexEntry, Categories_ByPreMadeTextEmbeddings>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromTextEmbeddings),
+ searchTerm => searchTerm
+ .ByText("candy"), 0.75f, 20, isExact: true)
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Category>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarCategories = session.Advanced
+    .DocumentQuery<Categories_ByPreMadeTextEmbeddings.IndexEntry, Categories_ByPreMadeTextEmbeddings>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromTextEmbeddings),
+ searchTerm => searchTerm
+ .ByText("candy"), 0.75f, 20, isExact: true)
+ .WaitForNonStaleResults()
+    .OfType<Category>()
+ .ToList();
+`}
+
+
+
+
+{`var similarCategories = await asyncSession.Advanced
+    .AsyncDocumentQuery<Categories_ByPreMadeTextEmbeddings.IndexEntry, Categories_ByPreMadeTextEmbeddings>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromTextEmbeddings),
+ searchTerm => searchTerm
+ .ByText("candy"),
+ 0.75f, 20, isExact: true)
+ .WaitForNonStaleResults()
+    .OfType<Category>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarCategories = session.Advanced
+ .RawQuery(@"
+ from index 'Categories/ByPreMadeTextEmbeddings'
+ // Optionally, wrap the 'vector.search' query with 'exact()' to perform an exact search
+ where exact(vector.search(VectorFromTextEmbeddings, $searchTerm, 0.75, 20))")
+ .AddParameter("searchTerm", "candy")
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var similarCategories = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Categories/ByPreMadeTextEmbeddings'
+ // Optionally, wrap the 'vector.search' query with 'exact()' to perform an exact search
+ where exact(vector.search(VectorFromTextEmbeddings, $searchTerm, 0.75, 20))")
+ .AddParameter("searchTerm", "candy")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Categories/ByPreMadeTextEmbeddings"
+// Optionally, wrap the 'vector.search' query with 'exact()' to perform an exact search
+where exact(vector.search(VectorFromTextEmbeddings, $p0, 0.75, 20))
+{ "p0": "candy" }
+`}
+
+
+
+
+---
+
+## Indexing vector data - NUMERICAL
+
+
+
+* RavenDB’s [Embedding generation tasks](../../../ai-integration/generating-embeddings/overview.mdx) are typically used to generate vector embeddings from TEXTUAL data stored in your documents.
+ These embeddings are then stored in [dedicated collections](../../../ai-integration/generating-embeddings/embedding-collections.mdx).
+
+* However, you are not limited to using these built-in tasks.
+  You can generate your own NUMERICAL embeddings - from any source (e.g., text, images, or audio) - using a suitable multimodal model, and store them:
+ * as numerical arrays in your documents’ properties, or
+ * as attachments associated with your documents.
+
+* This numerical data can be indexed in a vector field in a static-index.
+ Once indexed, you can query the vector field using either of the following:
+
+ * **Query using a numerical embedding (direct vector)**:
+ You provide a numerical array as the search term, and RavenDB compares it directly against the indexed embeddings.
+ See [Indexing numerical data and querying using numeric input](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-numerical-data-and-querying-using-numeric-input).
+
+ * **Query using a text input**:
+ You provide a text string as the search term and specify an existing [Embedding generation task](../../../ai-integration/generating-embeddings/overview.mdx) that will convert this text into a vector embedding.
+ This will work only if:
+ * the vector field you're querying contains numerical embeddings that were created using the **same model** as the one configured in the specified task, and
+ * that task exists in your database (i.e., its identifier is still available).
+
+ In this case, RavenDB uses the task to transform the search term into an embedding, then compares it to the vector data that you had previously indexed yourself.
+ To improve performance, the generated embedding is cached, so repeated queries with the same search term don’t require re-computation.
+
+ This hybrid approach allows you to index custom embeddings (e.g., externally generated image vectors)
+ while still benefiting from RavenDB’s ability to perform semantic text search, as long as the same model was used for both.
+ See [Indexing numerical data and querying using text input](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-numerical-data-and-querying-using-text-input).
+
+* The examples in this section use the [sample data provided in the dynamic query article](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#sample-data).
+
+
+
+
+### Indexing numerical data and querying using numeric input
+
+The following index defines a vector field named `VectorFromSingle`.
+It indexes embeddings generated from the numerical data in the `TagsEmbeddedAsSingle` field of all _Movie_ documents.
+The raw numerical data in the source documents is in **32-bit floating-point format**.
+
+
+
+
+{`public class Movies_ByVector_Single :
+    AbstractIndexCreationTask<Movie, Movies_ByVector_Single.IndexEntry>
+{
+ public class IndexEntry()
+ {
+ // This index-field will hold the embeddings that will be generated
+ // from the NUMERICAL content in the documents.
+ public object VectorFromSingle { get; set; }
+ }
+
+ public Movies_ByVector_Single()
+ {
+ Map = movies => from movie in movies
+ select new IndexEntry
+ {
+ // Call 'CreateVector' to create a VECTOR FIELD.
+ // Pass the document field containing the array (32-bit floating-point values)
+ // from which the embeddings will be generated.
+ VectorFromSingle = CreateVector(movie.TagsEmbeddedAsSingle)
+ };
+
+ // EITHER - Customize the vector field using VectorOptions:
+ VectorIndexes.Add(x => x.VectorFromSingle,
+ new VectorOptions()
+ {
+ // Define the source embedding type
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+
+ // Define the quantization for the destination embedding
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+
+ // It is recommended to configure the number of dimensions
+ // which is the size of the arrays that will be indexed.
+ Dimensions = 2,
+
+ // Optionally, set the number of edges and candidates
+ NumberOfEdges = 20,
+ NumberOfCandidatesForIndexing = 20
+ });
+
+ // OR - Customize the vector field using builder:
+ Vector(x => x.VectorFromSingle,
+ builder => builder
+ .SourceEmbedding(VectorEmbeddingType.Single)
+ .DestinationEmbedding(VectorEmbeddingType.Single)
+ .Dimensions(2)
+ .NumberOfEdges(20)
+ .NumberOfCandidates(20));
+
+ // The index MUST use the Corax search engine
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`public class Movies_ByVector_Single_JS : AbstractJavaScriptIndexCreationTask
+{
+ public Movies_ByVector_Single_JS()
+ {
+        Maps = new HashSet<string>()
+ {
+ @"map('Movies', function (movie) {
+ return {
+ VectorFromSingle: createVector(movie.TagsEmbeddedAsSingle)
+ };
+ })"
+ };
+
+ Fields = new();
+ Fields.Add("VectorFromSingle", new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+ Dimensions = 2,
+ NumberOfEdges = 20,
+ NumberOfCandidatesForIndexing = 20
+ }
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`var indexDefinition = new IndexDefinition
+{
+ Name = "Movies/ByVector/Single",
+
+    Maps = new HashSet<string>
+ {
+ @"
+ from movie in docs.Movies
+ select new
+ {
+ VectorFromSingle = CreateVector(movie.TagsEmbeddedAsSingle)
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromSingle",
+ new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+ Dimensions = 2,
+ NumberOfEdges = 20,
+ NumberOfCandidatesForIndexing = 20
+ }
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+
+Execute a vector search using the index:
+(Provide a vector as the search term to the `ByEmbedding` method)
+
+
+
+
+{`var similarMovies = session
+    .Query<Movies_ByVector_Single.IndexEntry, Movies_ByVector_Single>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ field => field
+ // Call 'WithField'
+ // Specify the index-field in which to search for similar values
+ .WithField(x => x.VectorFromSingle),
+ queryVector => queryVector
+ // Call 'ByEmbedding'
+ // Provide the vector for the similarity comparison
+ .ByEmbedding(
+                new RavenVector<float>(new float[] { 6.599999904632568f, 7.699999809265137f })))
+    .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Movie>()
+ .ToList();
+`}
+
+
+
+
+{`var similarMovies = await asyncSession
+    .Query<Movies_ByVector_Single.IndexEntry, Movies_ByVector_Single>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromSingle),
+ queryVector => queryVector
+ .ByEmbedding(
+                new RavenVector<float>(new float[] { 6.599999904632568f, 7.699999809265137f })))
+    .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Movie>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarMovies = session.Advanced
+    .DocumentQuery<Movies_ByVector_Single.IndexEntry, Movies_ByVector_Single>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromSingle),
+ queryVector => queryVector
+ .ByEmbedding(
+                new RavenVector<float>(new float[] { 6.599999904632568f, 7.699999809265137f })))
+    .WaitForNonStaleResults()
+    .OfType<Movie>()
+ .ToList();
+`}
+
+
+
+
+{`var similarMovies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Movies_ByVector_Single.IndexEntry, Movies_ByVector_Single>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromSingle),
+ queryVector => queryVector
+ .ByEmbedding(
+                new RavenVector<float>(new float[] { 6.599999904632568f, 7.699999809265137f })))
+    .WaitForNonStaleResults()
+    .OfType<Movie>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarMovies = session.Advanced
+ .RawQuery(@"
+ from index 'Movies/ByVector/Single'
+ where vector.search(VectorFromSingle, $queryVector)")
+    .AddParameter("queryVector", new RavenVector<float>(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ }))
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var similarMovies = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Movies/ByVector/Single'
+ where vector.search(VectorFromSingle, $queryVector)")
+    .AddParameter("queryVector", new RavenVector<float>(new float[]
+ {
+ 6.599999904632568f, 7.699999809265137f
+ }))
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Movies/ByVector/Single"
+where vector.search(VectorFromSingle, $queryVector)
+{ "queryVector" : { "@vector" : [6.599999904632568, 7.699999809265137] }}
+`}
+
+
+
+
+The following index defines a vector field named `VectorFromInt8Arrays`.
+It indexes embeddings generated from the numerical arrays in the `TagsEmbeddedAsInt8` field of all _Movie_ documents.
+The raw numerical data in the source documents is in **Int8 (8-bit integers) format**.
+
+
+
+
+{`public class Movies_ByVector_Int8 :
+    AbstractIndexCreationTask<Movie, Movies_ByVector_Int8.IndexEntry>
+{
+ public class IndexEntry()
+ {
+ // This index-field will hold the embeddings that will be generated
+ // from the NUMERICAL content in the documents.
+ public object VectorFromInt8Arrays { get; set; }
+ }
+
+ public Movies_ByVector_Int8()
+ {
+ Map = movies => from movie in movies
+ select new IndexEntry
+ {
+ // Call 'CreateVector' to create a VECTOR FIELD.
+ // Pass the document field containing the arrays (8-bit integer values)
+ // from which the embeddings will be generated.
+ VectorFromInt8Arrays = CreateVector(movie.TagsEmbeddedAsInt8)
+ };
+
+ // EITHER - Customize the vector field using VectorOptions:
+ VectorIndexes.Add(x => x.VectorFromInt8Arrays,
+ new VectorOptions()
+ {
+ // Define the source embedding type
+ SourceEmbeddingType = VectorEmbeddingType.Int8,
+
+ // Define the quantization for the destination embedding
+ DestinationEmbeddingType = VectorEmbeddingType.Int8,
+
+ // It is recommended to configure the number of dimensions
+ // which is the size of the arrays that will be indexed.
+ Dimensions = 2,
+
+ // Optionally, set the number of edges and candidates
+ NumberOfEdges = 20,
+ NumberOfCandidatesForIndexing = 20
+ });
+
+ // OR - Customize the vector field using builder:
+ Vector(x => x.VectorFromInt8Arrays,
+ builder => builder
+ .SourceEmbedding(VectorEmbeddingType.Int8)
+ .DestinationEmbedding(VectorEmbeddingType.Int8)
+ .Dimensions(2)
+ .NumberOfEdges(20)
+ .NumberOfCandidates(20));
+
+ // The index MUST use the Corax search engine
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`public class Movies_ByVector_Int8_JS : AbstractJavaScriptIndexCreationTask
+{
+ public Movies_ByVector_Int8_JS()
+ {
+        Maps = new HashSet<string>()
+ {
+ @"map('Movies', function (movie) {
+ return {
+ VectorFromInt8Arrays: createVector(movie.TagsEmbeddedAsInt8)
+ };
+ })"
+ };
+
+ Fields = new();
+ Fields.Add("VectorFromInt8Arrays", new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Int8,
+ DestinationEmbeddingType = VectorEmbeddingType.Int8,
+ Dimensions = 2,
+ NumberOfEdges = 20,
+ NumberOfCandidatesForIndexing = 20
+ }
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`var indexDefinition = new IndexDefinition
+{
+ Name = "Movies/ByVector/Int8",
+
+    Maps = new HashSet<string>
+ {
+ @"
+ from movie in docs.Movies
+ select new
+ {
+ VectorFromInt8Arrays = CreateVector(movie.TagsEmbeddedAsInt8)
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromInt8Arrays",
+ new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Int8,
+ DestinationEmbeddingType = VectorEmbeddingType.Int8,
+ Dimensions = 2,
+ NumberOfEdges = 20,
+ NumberOfCandidatesForIndexing = 20
+ }
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+
+Execute a vector search using the index:
+(Provide a vector as the search term to the `ByEmbedding` method)
+
+
+
+
+{`var similarMovies = session
+    .Query<Movies_ByVector_Int8.IndexEntry, Movies_ByVector_Int8>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ field => field
+ // Call 'WithField'
+ // Specify the index-field in which to search for similar values
+ .WithField(x => x.VectorFromInt8Arrays),
+ queryVector => queryVector
+ // Call 'ByEmbedding'
+ // Provide the vector for the similarity comparison
+ // (Note: provide a single vector)
+ .ByEmbedding(
+ // The provided vector MUST be in the same format as was stored in your document
+ // Call 'VectorQuantizer.ToInt8' to transform the rawData to the Int8 format
+ VectorQuantizer.ToInt8(new float[] { 0.1f, 0.2f })))
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Movie>()
+ .ToList();
+`}
+
+
+
+
+{`var similarMovies = await asyncSession
+    .Query<Movies_ByVector_Int8.IndexEntry, Movies_ByVector_Int8>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromInt8Arrays),
+ queryVector => queryVector
+ .ByEmbedding(
+ VectorQuantizer.ToInt8(new float[] { 0.1f, 0.2f })))
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Movie>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarMovies = session.Advanced
+    .DocumentQuery<Movies_ByVector_Int8.IndexEntry, Movies_ByVector_Int8>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromInt8Arrays),
+ queryVector => queryVector
+ .ByEmbedding(
+ VectorQuantizer.ToInt8(new float[] { 0.1f, 0.2f })))
+ .WaitForNonStaleResults()
+    .OfType<Movie>()
+ .ToList();
+`}
+
+
+
+
+{`var similarMovies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Movies_ByVector_Int8.IndexEntry, Movies_ByVector_Int8>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromInt8Arrays),
+ queryVector => queryVector
+ .ByEmbedding(
+ VectorQuantizer.ToInt8(new float[] { 0.1f, 0.2f })))
+ .WaitForNonStaleResults()
+    .OfType<Movie>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarMovies = session.Advanced
+ .RawQuery(@"
+ from index 'Movies/ByVector/Int8'
+ where vector.search(VectorFromInt8Arrays, $queryVector)")
+ .AddParameter("queryVector", VectorQuantizer.ToInt8(new float[] { 0.1f, 0.2f }))
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var similarMovies = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Movies/ByVector/Int8'
+ where vector.search(VectorFromInt8Arrays, $queryVector)")
+ .AddParameter("queryVector", VectorQuantizer.ToInt8(new float[] { 0.1f, 0.2f }))
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Movies/ByVector/Int8"
+where vector.search(VectorFromInt8Arrays, $queryVector)
+{ "queryVector" : [64, 127, -51, -52, 76, 62] }
+`}
+
+
+
+
+### Indexing numerical data and querying using text input
+
+The following index defines a vector field named `VectorFromPhoto`.
+It indexes embeddings generated from the numerical data in the `MoviePhotoEmbedding` field of all _Movie_ documents.
+
+
+
+```csharp
+public class Movies_ByVectorFromPhoto :
+    AbstractIndexCreationTask<Movie, Movies_ByVectorFromPhoto.IndexEntry>
+{
+ public class IndexEntry()
+ {
+ // This index-field will hold the embeddings that will be generated
+ // from the NUMERICAL content in the documents.
+ public object VectorFromPhoto { get; set; }
+ }
+
+ public Movies_ByVectorFromPhoto()
+ {
+ Map = movies => from movie in movies
+ select new IndexEntry
+ {
+ // Call 'CreateVector' to create a VECTOR FIELD.
+ // Pass the document field containing the array
+ // from which the embeddings will be generated.
+ VectorFromPhoto = CreateVector(movie.MoviePhotoEmbedding)
+ };
+
+ // Customize the vector field:
+ Vector(x => x.VectorFromPhoto,
+ builder => builder
+ .SourceEmbedding(VectorEmbeddingType.Single)
+ .DestinationEmbedding(VectorEmbeddingType.Single)
+ // Dimensions should match the embedding size, 6 is only for our simple example...
+ .Dimensions(6));
+
+ // The index MUST use the Corax search engine
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+```
+
+
+```csharp
+public class Movies_ByVectorFromPhoto_JS : AbstractJavaScriptIndexCreationTask
+{
+ public Movies_ByVectorFromPhoto_JS()
+ {
+        Maps = new HashSet<string>()
+ {
+ @"map('Movies', function (movie) {
+ return {
+ VectorFromPhoto: createVector(movie.MoviePhotoEmbedding)
+ };
+ })"
+ };
+
+ Fields = new();
+ Fields.Add("VectorFromPhoto", new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+ Dimensions = 6, // using 6 only for this simple example
+ }
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+```
+
+
+```csharp
+var indexDefinition = new IndexDefinition
+{
+ Name = "Movies/ByVectorFromPhoto",
+
+    Maps = new HashSet<string>
+ {
+ @"
+ from movie in docs.Movies
+ select new
+ {
+ VectorFromPhoto = CreateVector(movie.MoviePhotoEmbedding)
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "VectorFromPhoto",
+ new IndexFieldOptions()
+ {
+ Vector = new VectorOptions()
+ {
+ SourceEmbeddingType = VectorEmbeddingType.Single,
+ DestinationEmbeddingType = VectorEmbeddingType.Single,
+ Dimensions = 6, // using 6 only for this simple example
+ }
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+```
+
+
+
+Execute a vector search using the index:
+
+ * Pass a textual search term to the `ByText` method,
+ along with the ID of the embedding generation task that will convert the text into an embedding.
+
+ * The query is only meaningful if the vector field being searched contains numerical embeddings
+ generated using the same model as the one configured in the specified task.
+
+ * If the specified task ID is not found, RavenDB will throw an `InvalidQueryException`.
+ To avoid this error, you can verify that the specified embeddings generation task exists before issuing the query.
+ See [Get embeddings generation task details](../ai-integration/generating-embeddings/overview#get-embeddings-generation-task-details)
+ to learn how to check which tasks are defined and what their identifiers are.
+
+
+
+```csharp
+// Query for movies with images related to 'NASA'
+var similarMovies = session
+    .Query<Movies_ByVectorFromPhoto.IndexEntry, Movies_ByVectorFromPhoto>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ field => field
+ // Call 'WithField'
+ // Specify the index field that stores the image embeddings
+ .WithField(x => x.VectorFromPhoto),
+ queryVector => queryVector
+ // Call 'ByText'
+ // Provide a textual description to be embedded by the same multimodal model
+ // used for the MoviePhotoEmbedding field
+ .ByText("NASA", "id-of-embedding-generation-task"),
+ // As with any other vector search query, you can optionally specify
+ // 'minimumSimilarity', 'numberOfCandidates', and 'isExact'
+ minimumSimilarity: 0.85f)
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Movie>()
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession
+    .Query<Movies_ByVectorFromPhoto.IndexEntry, Movies_ByVectorFromPhoto>()
+ .VectorSearch(
+ field => field.WithField(x => x.VectorFromPhoto),
+ queryVector => queryVector.ByText("NASA", "id-of-embedding-generation-task"),
+ minimumSimilarity: 0.85f)
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Movie>()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarMovies = session.Advanced
+    .DocumentQuery<Movies_ByVectorFromPhoto.IndexEntry, Movies_ByVectorFromPhoto>()
+ .VectorSearch(
+ field => field.WithField(x => x.VectorFromPhoto),
+ queryVector => queryVector.ByText("NASA", "id-of-embedding-generation-task"), 0.85f)
+ .WaitForNonStaleResults()
+    .OfType<Movie>()
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession.Advanced
+    .AsyncDocumentQuery<Movies_ByVectorFromPhoto.IndexEntry, Movies_ByVectorFromPhoto>()
+ .VectorSearch(
+ "VectorFromPhoto",
+ queryVector => queryVector.ByText("NASA", "id-of-embedding-generation-task"), 0.85f)
+ .WaitForNonStaleResults()
+    .OfType<Movie>()
+ .ToListAsync();
+```
+
+
+```csharp
+var similarMovies = session
+ .Advanced
+ .RawQuery(@"
+ from index 'Movies/ByVectorFromPhoto'
+ where vector.search(VectorFromPhoto, embedding.text($searchTerm, ai.task($embeddingTaskId)), 0.85, null)
+ ")
+ .AddParameter("searchTerm", "NASA")
+ .AddParameter("embeddingTaskId", "id-of-embedding-generation-task")
+ .ToList();
+```
+
+
+```csharp
+var similarMovies = await asyncSession
+ .Advanced
+ .RawQuery(@"
+ from index 'Movies/ByVectorFromPhoto'
+ where vector.search(VectorFromPhoto, embedding.text($searchTerm, ai.task($embeddingTaskId)), 0.85, null)
+ ")
+ .AddParameter("searchTerm", "NASA")
+ .AddParameter("embeddingTaskId", "id-of-embedding-generation-task")
+ .ToListAsync();
+```
+
+
+```sql
+from index 'Movies/ByVectorFromPhoto'
+where vector.search(VectorFromPhoto, embedding.text($searchTerm, ai.task($embeddingTaskId)), 0.85, null)
+{ "searchTerm" : "NASA", "embeddingTaskId" : "id-of-embedding-generation-task" }
+```
+
+
+
+---
+
+## Indexing multiple field types
+
+An index can define multiple types of index-fields. In this example, the index includes
+a _'regular'_ field, a _'vector'_ field, and a field configured for [full-text search](../../../indexes/querying/searching.mdx).
+This allows you to query across all of these fields using various predicates.
+
+
+
+
+{`public class Products_ByMultipleFields :
+    AbstractIndexCreationTask<Product, Products_ByMultipleFields.IndexEntry>
+{
+ public class IndexEntry()
+ {
+ // An index-field for 'regular' data
+ public decimal PricePerUnit { get; set; }
+
+ // An index-field for 'full-text' search
+ public string Name { get; set; }
+
+ // An index-field for 'vector' search
+ public object VectorFromText { get; set; }
+ }
+
+ public Products_ByMultipleFields()
+ {
+ Map = products => from product in products
+ select new IndexEntry
+ {
+ PricePerUnit = product.PricePerUnit,
+ Name = product.Name,
+ VectorFromText = CreateVector(product.Name)
+ };
+
+ // Configure the index-field 'Name' for FTS:
+ Index(x => x.Name, FieldIndexing.Search);
+
+ // Note:
+ // Default values will be used for the VECTOR FIELD if not customized here.
+
+ // The index MUST use the Corax search engine
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`public class Products_ByMultipleFields_JS : AbstractJavaScriptIndexCreationTask
+{
+ public Products_ByMultipleFields_JS()
+ {
+        Maps = new HashSet<string>()
+ {
+ @"map('Products', function (product) {
+ return {
+ PricePerUnit: product.PricePerUnit,
+ Name: product.Name,
+ VectorFromText: createVector(product.Name)
+ };
+ })"
+ };
+
+ Fields = new();
+ Fields.Add("Name", new IndexFieldOptions()
+ {
+ Indexing = FieldIndexing.Search
+ });
+
+ SearchEngineType = Raven.Client.Documents.Indexes.SearchEngineType.Corax;
+ }
+}
+`}
+
+
+
+
+{`var indexDefinition = new IndexDefinition
+{
+ Name = "Products/ByMultipleFields",
+    Maps = new HashSet<string>
+ {
+ @"
+ from product in docs.Products
+ select new
+ {
+ PricePerUnit = product.PricePerUnit,
+ Name = product.Name,
+ VectorFromText = CreateVector(product.Name)
+ }"
+ },
+
+    Fields = new Dictionary<string, IndexFieldOptions>()
+ {
+ {
+ "Name",
+ new IndexFieldOptions()
+ {
+ Indexing = FieldIndexing.Search
+ }
+ }
+ },
+
+ Configuration = new IndexConfiguration()
+ {
+ ["Indexing.Static.SearchEngineType"] = "Corax"
+ }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+
+Execute a query that combines predicates across all index-field types:
+
+
+
+
+{`var results = session.Advanced
+    .DocumentQuery<Products_ByMultipleFields.IndexEntry, Products_ByMultipleFields>()
+ // Perform a regular search
+ .WhereGreaterThan(x => x.PricePerUnit, 200)
+ .OrElse()
+ // Perform a full-text search
+ .Search(x => x.Name, "Alice")
+ .OrElse()
+ // Perform a vector search
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromText),
+ searchTerm => searchTerm
+ .ByText("italian food"),
+ minimumSimilarity: 0.8f)
+ .WaitForNonStaleResults()
+    .OfType<Product>()
+ .ToList();
+`}
+
+
+
+
+{`var results = await asyncSession.Advanced
+    .AsyncDocumentQuery<Products_ByMultipleFields.IndexEntry, Products_ByMultipleFields>()
+ .WhereGreaterThan(x => x.PricePerUnit, 200)
+ .OrElse()
+ .Search(x => x.Name, "Alice")
+ .OrElse()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromText),
+ searchTerm => searchTerm
+ .ByText("italian food"),
+ minimumSimilarity: 0.8f)
+ .WaitForNonStaleResults()
+    .OfType<Product>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var results = session.Advanced
+ .RawQuery(@"
+ from index 'Products/ByMultipleFields'
+ where PricePerUnit > $minPrice
+ or search(Name, $searchTerm1)
+ or vector.search(VectorFromText, $searchTerm2, 0.8)")
+ .AddParameter("minPrice", 200)
+ .AddParameter("searchTerm1", "Alice")
+ .AddParameter("searchTerm2", "italian")
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var results = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Products/ByMultipleFields'
+ where PricePerUnit > $minPrice
+ or search(Name, $searchTerm1)
+ or vector.search(VectorFromText, $searchTerm2, 0.8)")
+ .AddParameter("minPrice", 200)
+ .AddParameter("searchTerm1", "Alice")
+ .AddParameter("searchTerm2", "italian")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Products/ByMultipleFields"
+where PricePerUnit > $minPrice
+or search(Name, $searchTerm1)
+or vector.search(VectorFromText, $searchTerm2, 0.8)
+{ "minPrice" : 200, "searchTerm1" : "Alice", "searchTerm2": "italian" }
+`}
+
+
+
+
+---
+
+## Querying the static index for similar documents
+
+* Similar to [querying for similar documents using a dynamic query](../../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---querying-for-similar-documents),
+ you can **query a static-index for similar documents** by specifying a document ID in the vector search.
+
+* The following example queries the static-index defined in [this example](../../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-vector-data---text) above.
+ The document for which we want to find similar documents is specified by the document ID passed to the `ForDocument` method.
+
+* RavenDB retrieves the embedding that was indexed for the queried field in the specified document and uses it as the query vector for the similarity comparison.
+
+* The results will include documents whose indexed embeddings are most similar to the one stored in the referenced document’s index-entry.
+
+
+
+
+{`var similarProducts = session
+    .Query<Products_ByVector_Text.IndexEntry, Products_ByVector_Text>()
+ // Perform a vector search
+ // Call the 'VectorSearch' method
+ .VectorSearch(
+ field => field
+ // Call 'WithField'
+ // Specify the index-field in which to search for similar values
+ .WithField(x => x.VectorFromText),
+ embedding => embedding
+ // Call 'ForDocument'
+ // Provide the document ID for which you want to find similar documents.
+ // The embedding stored in the index for the specified document
+ // will be used as the "query vector".
+ .ForDocument("Products/7-A"),
+ // Optionally, specify the minimum similarity value
+ minimumSimilarity: 0.82f)
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Product>()
+ .ToList();
+`}
+
+
+
+
+{`var similarProducts = await asyncSession
+    .Query<Products_ByVector_Text.IndexEntry, Products_ByVector_Text>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromText),
+ embedding => embedding
+ .ForDocument("Products/7-A"),
+ minimumSimilarity: 0.82f)
+ .Customize(x => x.WaitForNonStaleResults())
+    .OfType<Product>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarProducts = session.Advanced
+    .DocumentQuery<Products_ByVector_Text.IndexEntry, Products_ByVector_Text>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromText),
+ embedding => embedding
+ .ForDocument("Products/7-A"),
+ minimumSimilarity: 0.82f)
+ .WaitForNonStaleResults()
+    .OfType<Product>()
+ .ToList();
+`}
+
+
+
+
+{`var similarProducts = await asyncSession.Advanced
+    .AsyncDocumentQuery<Products_ByVector_Text.IndexEntry, Products_ByVector_Text>()
+ .VectorSearch(
+ field => field
+ .WithField(x => x.VectorFromText),
+ embedding => embedding
+ .ForDocument("Products/7-A"),
+ minimumSimilarity: 0.82f)
+ .WaitForNonStaleResults()
+    .OfType<Product>()
+ .ToListAsync();
+`}
+
+
+
+
+{`var similarProducts = session.Advanced
+ .RawQuery(@"
+ from index 'Products/ByVector/Text'
+ // Pass a document ID to the 'forDoc' method to find similar documents
+ where vector.search(VectorFromText, embedding.forDoc($documentID), 0.82)")
+ .AddParameter("$documentID", "Products/7-A")
+ .WaitForNonStaleResults()
+ .ToList();
+`}
+
+
+
+
+{`var similarProducts = await asyncSession.Advanced
+ .AsyncRawQuery(@"
+ from index 'Products/ByVector/Text'
+ // Pass a document ID to the 'forDoc' method to find similar documents
+ where vector.search(VectorFromText, embedding.forDoc($documentID), 0.82)")
+ .AddParameter("$documentID", "Products/7-A")
+ .WaitForNonStaleResults()
+ .ToListAsync();
+`}
+
+
+
+
+{`from index "Products/ByVector/Text"
+// Pass a document ID to the 'forDoc' method to find similar documents
+where vector.search(VectorFromText, embedding.forDoc($documentID), 0.82)
+{"documentID" : "Products/7-A"}
+`}
+
+
+
+
+Running the above example on RavenDB’s sample data returns the following documents that have similar content in their _Name_ field:
+(Note: the results include the referenced document itself, _Products/7-A_)
+
+
+
+{`// ID: products/7-A ... Name: "Uncle Bob's Organic Dried Pears"
+// ID: products/51-A ... Name: "Manjimup Dried Apples"
+// ID: products/6-A ... Name: "Grandma's Boysenberry Spread"
+`}
+
+
+
+---
+
+## Configure the vector field in the Studio
+
+ 
+
+ 
+
+1. **Vector field name**
+ Enter the name of the vector field to customize.
+2. **Configure Vector Field**
+ Click this button to customize the field.
+3. **Dimensions**
+   For numerical input only - defines the number of dimensions (the size of the embedding array) in your source documents.
+4. **Edges**
+ The number of edges that will be created for a vector during indexing.
+5. **Source embedding type**
+ The format of the source embeddings (Text, Single, Int8, or Binary).
+6. **Candidates for indexing**
+ The number of candidates (potential neighboring vectors) that RavenDB evaluates during vector indexing.
+7. **Destination embedding type**
+ The quantization format for the embeddings that will be generated (Text, Single, Int8, or Binary).
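+
+The same settings can also be applied in code when defining the index. The sketch below maps each numbered Studio field above to a `VectorOptions` property; `Dimensions` and the embedding types appear in the examples in this documentation, while the `NumberOfEdges` and `NumberOfCandidatesForIndexing` property names are assumed here - verify them for your client version:
+
+```csharp
+Fields.Add("VectorFromPhoto", new IndexFieldOptions()
+{
+    Vector = new VectorOptions()
+    {
+        Dimensions = 6,                                       // (3) numerical input only
+        NumberOfEdges = 16,                                   // (4) edges per vector
+        SourceEmbeddingType = VectorEmbeddingType.Single,     // (5) format of the source embeddings
+        NumberOfCandidatesForIndexing = 128,                  // (6) candidates at indexing time
+        DestinationEmbeddingType = VectorEmbeddingType.Int8   // (7) quantization format
+    }
+});
+```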
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/data-types-for-vector-search.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/data-types-for-vector-search.mdx
new file mode 100644
index 0000000000..06caaac31b
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/data-types-for-vector-search.mdx
@@ -0,0 +1,31 @@
+---
+title: "Data Types for Vector Search"
+hide_table_of_contents: true
+sidebar_label: Data Types for Vector Search
+sidebar_position: 5
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DataTypesForVectorSearchCsharp from './content/_data-types-for-vector-search-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/indexing-attachments-for-vector-search.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/indexing-attachments-for-vector-search.mdx
new file mode 100644
index 0000000000..0cf688186a
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/indexing-attachments-for-vector-search.mdx
@@ -0,0 +1,31 @@
+---
+title: "Indexing Attachments for Vector Search"
+hide_table_of_contents: true
+sidebar_label: Indexing Attachments for Vector Search
+sidebar_position: 4
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import IndexingAttachmentsForVectorSearchCsharp from './content/_indexing-attachments-for-vector-search-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/ravendb-as-vector-database.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/ravendb-as-vector-database.mdx
new file mode 100644
index 0000000000..eb73ac1b65
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/ravendb-as-vector-database.mdx
@@ -0,0 +1,115 @@
+---
+title: "RavenDB as a Vector Database"
+hide_table_of_contents: true
+sidebar_label: RavenDB as a Vector Database
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# RavenDB as a Vector Database
+
+
+
+* In this article:
+ * [What is a vector database](../../ai-integration/vector-search/ravendb-as-vector-database.mdx#what-is-a-vector-database)
+ * [Why choose RavenDB as your vector database](../../ai-integration/vector-search/ravendb-as-vector-database.mdx#why-choose-ravendb-as-your-vector-database)
+
+
+
+## What is a vector database
+
+* A vector database stores data as high-dimensional numerical representations (embedding vectors),
+ enabling searches based on contextual meaning and vector similarity rather than exact keyword matches.
+ Instead of relying on traditional indexing, it retrieves relevant results by measuring how close vectors are in a multi-dimensional space.
+
+* Vector databases are widely used in applications such as:
+
+ * Semantic search – Finding documents based on meaning rather than exact words.
+ * Recommendation engines – Suggesting products, media, or content based on similarity.
+ * AI and machine learning – Powering LLMs, multi-modal search, and object detection.
+
+**Embeddings**:
+
+* A vector database stores data as vectors in a high-dimensional space.
+ These vectors, known as **embeddings**, are mathematical representations of your data.
+
+* Each embedding is an array of numbers (e.g. [0.45, 3.6, 1.25, 0.7, ...]), where each dimension represents specific characteristics of the data, capturing its contextual meaning.
+ Words, phrases, entire documents, images, audio, and other types of data can all be vectorized.
+
+* The raw data is converted into embeddings using [transformers](https://huggingface.co/docs/transformers).
+ To optimize storage and computation, transformers can encode embeddings with lower-precision data types, such as 8-bit integers, through a technique called [quantization](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options).
+
+**Indexing embeddings and searching**:
+
+* The embedding vectors are indexed and stored in a vector space.
+ Their positions reflect relationships and characteristics of the data as determined by the model that generated them.
+ The distance between two embeddings in the vector space correlates with the similarity of their original inputs within that model's context.
+
+* Vectors representing similar data are positioned close to each other in the vector space.
+ This is achieved using algorithms such as [HNSW](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world), which is designed for indexing and querying embeddings.
+ HNSW constructs a graph-based structure that efficiently retrieves approximate nearest neighbors in high-dimensional spaces.
+
+* This architecture enables **similarity searches**. Instead of conventional keyword-based queries,
+ a vector database lets you find relevant data based on semantic and contextual meaning.
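+
+To make "distance in the vector space" concrete, here is a minimal sketch (illustrative only - not part of the RavenDB API) of the cosine-similarity measure commonly used to compare two embeddings. Vectors that point in similar directions score close to `1.0`:
+
+```csharp
+using System;
+
+public static class EmbeddingMath
+{
+    // Cosine similarity between two embeddings of equal dimensionality.
+    // Returns a value in [-1, 1]; higher means more similar.
+    public static double CosineSimilarity(float[] a, float[] b)
+    {
+        if (a.Length != b.Length)
+            throw new ArgumentException("Embeddings must have the same number of dimensions");
+
+        double dot = 0, normA = 0, normB = 0;
+        for (var i = 0; i < a.Length; i++)
+        {
+            dot += (double)a[i] * b[i];
+            normA += (double)a[i] * a[i];
+            normB += (double)b[i] * b[i];
+        }
+        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
+    }
+}
+```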
+
+## Why choose RavenDB as your vector database
+
+##### An integrated solution:
+
+* RavenDB provides an integrated solution that combines high-performance NoSQL capabilities with advanced vector indexing and querying features,
+ enabling efficient storage and management of high-dimensional vector data.
+
+##### Reduced infrastructure complexity:
+
+* RavenDB's built-in vector search eliminates the need for external vector databases,
+ simplifying your infrastructure and reducing maintenance overhead.
+
+##### AI integration:
+
+* You can use RavenDB as the **vector database** for your AI-powered applications, including large language models (LLMs).
+ This eliminates the need to transfer data to expensive external services for vector similarity search,
+ providing a cost-effective and efficient solution for vector-based operations.
+
+##### Multiple field types in indexes:
+
+* An index can consist of multiple index-fields, each having a distinct type, such as a standard field, a spatial field, a full-text search field, or a **vector field**.
+ This flexibility allows you to work with complex documents containing various data types and retrieve meaningful insights by querying the index across all these fields.
+ An example is available in [Indexing multiple field types](../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-multiple-field-types).
+
+* Document [attachments](../../ai-integration/vector-search/indexing-attachments-for-vector-search.mdx) can also be indexed as vector fields, and Map-Reduce indexes can incorporate vector fields in their reduce phase,
+ further extending the versatility of your data processing and search capabilities.
+
+##### Built-in embedding support:
+
+* **Textual input**:
+ Embeddings can be automatically generated from textual content within your documents by defining
+ [Embeddings generation tasks](../../ai-integration/generating-embeddings/overview.mdx).
+ These tasks connect to external embedding providers such as **Azure OpenAI, OpenAI, Hugging Face, Google AI, Ollama, or Mistral AI**.
+ If no task is specified, embeddings will be generated using the built-in [bge-micro-v2](https://huggingface.co/TaylorAI/bge-micro-v2) model.
+
+ When querying with a phrase, RavenDB generates an embedding for the search term using the same model applied to the document data
+ and compares it against the indexed embeddings.
+
+* **Numerical arrays input**:
+ Documents in RavenDB can also contain numerical arrays with **pre-made embeddings** created elsewhere.
+ Use RavenDB's dedicated data type, [RavenVector](../../ai-integration/vector-search/data-types-for-vector-search.mdx#ravenvector), to store these embeddings in your document entities.
+  This type is highly optimized to reduce storage space and enhance the speed of reading arrays from disk (a minimal entity sketch appears at the end of this article).
+
+* **HNSW algorithm usage**:
+ All embeddings, whether generated from textual input or pre-made numerical arrays,
+ are indexed and searched for using the [HNSW](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) algorithm.
+
+* **Optimize storage via quantization**:
+ RavenDB allows you to select the quantization format for the generated embeddings when creating the index.
+ Learn more in [Quantization options](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#quantization-options).
+
+* **Perform vector search**:
+ Leverage RavenDB's [Auto-indexes](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx)
+ and [Static indexes](../../ai-integration/vector-search/vector-search-using-static-index.mdx) to perform a vector search,
+ retrieving documents based on contextual similarity rather than exact word matches.
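+
+As a quick illustration of the "Numerical arrays input" point above, a document entity holding a pre-made embedding might look like this (a sketch - see the [Data types for vector search](../../ai-integration/vector-search/data-types-for-vector-search.mdx) article for the authoritative `RavenVector` usage):
+
+```csharp
+public class Movie
+{
+    public string Id { get; set; }
+    public string Title { get; set; }
+
+    // A pre-made embedding generated outside RavenDB, stored with the
+    // storage- and read-optimized RavenVector type.
+    public RavenVector<float> TitleEmbedding { get; set; }
+}
+```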
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/vector-search-using-dynamic-query.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/vector-search-using-dynamic-query.mdx
new file mode 100644
index 0000000000..9df1df682d
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/vector-search-using-dynamic-query.mdx
@@ -0,0 +1,31 @@
+---
+title: "Vector Search using a Dynamic Query"
+hide_table_of_contents: true
+sidebar_label: Vector Search using a Dynamic Query
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import VectorSearchUsingDynamicQueryCsharp from './content/_vector-search-using-dynamic-query-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/vector-search-using-static-index.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/vector-search-using-static-index.mdx
new file mode 100644
index 0000000000..f288454d1a
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/vector-search-using-static-index.mdx
@@ -0,0 +1,31 @@
+---
+title: "Vector Search using a Static Index"
+hide_table_of_contents: true
+sidebar_label: Vector Search using a Static Index
+sidebar_position: 3
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import VectorSearchUsingStaticIndexCsharp from './content/_vector-search-using-static-index-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/vector-search_start.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/vector-search_start.mdx
new file mode 100644
index 0000000000..c4a6ae42ac
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/vector-search_start.mdx
@@ -0,0 +1,61 @@
+---
+title: "Vector search: Start"
+hide_table_of_contents: true
+sidebar_label: Start
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+
+import CardWithImage from "@site/src/components/Common/CardWithImage";
+import CardWithImageHorizontal from "@site/src/components/Common/CardWithImageHorizontal";
+import ColGrid from "@site/src/components/ColGrid";
+import aiImageSearchWithRavenDbImage from "./assets/ai-image-search-with-ravendb.webp";
+
+import ayendeBlogImage from "@site/static/img/from-ayende-com.webp";
+import webinarThumbnailPlaceholder from "@site/static/img/webinar.webp";
+
+# Vector search
+
+### Search by meaning and context using vector search operations.
+Vector search operations allow you to compare [Embeddings](https://en.wikipedia.org/wiki/Embedding_(machine_learning)) to find content by similarity rather than by exact matches - e.g., to find text by meaning or images by context.
+- You can search over embeddings that were generated by RavenDB [ongoing embeddings-generation tasks](../../ai-integration/generating-embeddings/embeddings-generation-task) or by an external embeddings provider.
+- You can also generate the embeddings for your documents on-the-fly, while searching.
+- When you run a vector search, your search query is converted into an embedding as well, and compared against document embeddings using either a dynamic query for ad-hoc or infrequent searches, or a static index for optimized performance.
+- Vector search can be used by other RavenDB AI features. E.g., [AI agents](../../ai-integration/ai-agents/ai-agents_start) can use vector search to retrieve relevant data requested by the LLM.
+
+### Use cases
+Vector search can help wherever you need to find similar items based on proximity rather than exact matches, for example:
+* **Knowledge and document search**
+ Find relevant documentation, policies, legal texts, or enterprise reports using natural language queries.
+* **Product and content recommendations**
+ Suggest similar products, articles, videos, or media based on descriptive queries and user preferences.
+* **Customer support automation**
+ Route questions to the best help articles, retrieve guides, and power chatbot responses with relevant information.
+* **Business intelligence and analysis**
+ Profile customers and uncover market trends by comparing behavioral and relationship-based similarities.
+* **Media and content analysis**
+ Discover similar images, moderate content, and monitor social media for brand mentions and sentiment.
+
+### Technical documentation
+Learn about vector search operations, how they use embeddings to find content by meaning or context, their ability to generate embeddings on the fly during searches, and other key aspects of this feature.
+
+
+
+
+
+
+#### Learn more: In-depth vector search articles
+
+
+
+
+
+
+### Related live sessions & videos
+Learn more about enhancing your applications using vector search operations.
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/ai-integration/vector-search/what-affects-vector-search-results.mdx b/versioned_docs/version-7.1/ai-integration/vector-search/what-affects-vector-search-results.mdx
new file mode 100644
index 0000000000..d8e8c4add4
--- /dev/null
+++ b/versioned_docs/version-7.1/ai-integration/vector-search/what-affects-vector-search-results.mdx
@@ -0,0 +1,172 @@
+---
+title: "What Affects Vector Search Results"
+hide_table_of_contents: true
+sidebar_label: What Affects Vector Search Results
+sidebar_position: 6
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# What Affects Vector Search Results
+
+
+
+* This article explains why vector search results might not always return what you expect, even when relevant documents exist.
+ It applies to both [Dynamic vector search queries](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx) and
+ [Static-index vector search queries](../../ai-integration/vector-search/vector-search-using-static-index.mdx).
+
+* Vector search in RavenDB uses the [HNSW](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) algorithm (Hierarchical Navigable Small World)
+ to index and search high-dimensional vector embeddings efficiently.
+ This algorithm prioritizes performance, speed, and scalability over exact precision.
+ Due to its approximate nature, results may occasionally exclude some relevant documents.
+
+* Several **indexing-time parameters** affect how the vector graph is built, and **query-time parameters** affect how the graph is searched.
+ These settings influence the trade-off between speed and accuracy.
+
+* If full accuracy is required, RavenDB also provides [Exact vector search](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#using-exact-search),
+ which performs a full scan of all indexed vectors to guarantee the closest matches.
+* In this article:
+ * [The approximate nature of HNSW](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#the-approximate-nature-of-hnsw)
+ * [Indexing-time parameters](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#indexing-time-parameters)
+ * [Query-time parameters](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#query-time-parameters)
+ * [Using exact search](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#using-exact-search)
+
+
+
+## The approximate nature of HNSW
+
+* **Graph structure**:
+
+ * HNSW builds a multi-layer graph, organizing vectors into a series of layers:
+ Top layers are sparse and support fast, broad navigation.
+ The bottom layer is dense and includes all indexed vectors for fine-grained matching.
+ * Each node (vector) is connected only to a limited number of neighbors, selected as the most relevant during indexing (graph build time).
+ This limitation is controlled by the [Indexing-time parameters](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#indexing-time-parameters) described below.
+ * This structure speeds up search but increases the chance that a relevant document is not reachable -
+ especially if it's poorly connected.
+
+* **Insertion order effects**:
+
+ * Because the HNSW graph is append-only and built incrementally,
+ the order in which documents are inserted can affect the final graph structure.
+ * Updates and deletes do not change the structure - deleted vectors are not physically removed, but marked as deleted (soft-deleted),
+ and updates typically replace a document by marking the old one as deleted and inserting a new one.
+ * This means that two databases containing the same documents may return different vector search results
+ if the documents were inserted in a different order.
+
+* **Greedy search**:
+
+ * HNSW uses a greedy search strategy to perform approximate nearest-neighbor (ANN) searches:
+ The search starts at the top layer from an entry point.
+ The algorithm then descends through the layers, always choosing the neighbor closest to the query vector.
+ * The algorithm doesn't exhaustively explore all possible paths, so it can miss the true global nearest neighbors -
+ especially if they are not well-connected in the graph.
+ This design enables HNSW to find relevant results very quickly by focusing only on the most promising paths, making it highly efficient even for large datasets.
+ * The search is influenced by the [Query-time params](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#query-time-parameters) described below.
+ Slight variations in graph structure or search parameters can lead to different results.
+ * While HNSW offers fast search performance at scale and quickly finds points that are likely to be among the nearest neighbors,
+ it does not guarantee exact results - only approximate matches are returned.
+ This behavior is expected in all ANN algorithms, not just HNSW.
+ If full accuracy is critical, consider using [Exact search](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#using-exact-search) instead.
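+
+To make the greedy strategy tangible, here is a deliberately simplified, self-contained sketch of the layer-by-layer descent. It is illustrative only - not RavenDB's implementation - and it omits the entry-point selection and the candidate priority queue (efSearch) that the real algorithm maintains:
+
+```csharp
+using System;
+using System.Collections.Generic;
+
+public class HnswNode
+{
+    public float[] Vector;
+    public List<HnswNode>[] NeighborsPerLayer; // NeighborsPerLayer[layer]
+}
+
+public static class GreedyDescentSketch
+{
+    private static double Distance(float[] a, float[] b)
+    {
+        double sum = 0;
+        for (var i = 0; i < a.Length; i++)
+            sum += ((double)a[i] - b[i]) * ((double)a[i] - b[i]);
+        return Math.Sqrt(sum);
+    }
+
+    public static HnswNode Search(HnswNode entryPoint, float[] query, int topLayer)
+    {
+        var current = entryPoint;
+        for (var layer = topLayer; layer >= 0; layer--) // descend layer by layer
+        {
+            var improved = true;
+            while (improved) // greedy: keep hopping to whichever neighbor is closer
+            {
+                improved = false;
+                foreach (var neighbor in current.NeighborsPerLayer[layer])
+                {
+                    if (Distance(neighbor.Vector, query) < Distance(current.Vector, query))
+                    {
+                        current = neighbor;
+                        improved = true;
+                    }
+                }
+            }
+        }
+        // An APPROXIMATE nearest neighbor - a poorly connected true neighbor can be missed.
+        return current;
+    }
+}
+```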
+
+## Indexing-time parameters
+
+The structure of the HNSW graph is determined at indexing time.
+RavenDB provides the following configuration parameters that control how the graph is built.
+These parameters influence how vectors are connected and how effective the search will be.
+They help keep memory usage and indexing time under control, but may also limit the graph’s ability to precisely represent all possible proximity relationships.
+
+* **Number of edges**:
+
+ * This parameter, which corresponds to the _M_ parameter in the original [HNSW paper](https://arxiv.org/abs/1603.09320),
+ controls how many connections (edges) each vector maintains in the HNSW graph.
+ Each node (vector) is connected to a limited number of neighbors in each layer - up to the value specified by this param.
+ These edges define the structure of the graph and affect how vectors are reached during search.
+ * A **larger** number of edges increases the graph’s density, improving connectivity and typically resulting in more accurate search results,
+ but it may also increase memory usage, slow down index construction, and result in a larger index.
+ A **smaller** value reduces memory usage and speeds up indexing,
+ but can result in a sparser graph with weaker connectivity and reduced search accuracy.
+ * With **static-indexes** -
+ This param can be set directly in the index definition. For example, see this [index definition](../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-raw-text).
+ If not explicitly set, or when using **dynamic queries** -
+ the value is taken from the [Indexing.Corax.VectorSearch.DefaultNumberOfEdges](../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofedges) configuration key.
+
+* **Number of candidates at indexing time**:
+
+ * During index construction, HNSW searches for potential neighbors when inserting each new vector into the graph.
+ This parameter (commonly referred to as _efConstruction_) controls how many neighboring vectors are considered during this process.
+ It defines the size of the candidate pool - the number of potential links evaluated for each insertion.
+ From the candidate pool, HNSW selects up to the configured _number of edges_ for each node.
+ * A **larger** candidate pool increases the chance of finding better-connected neighbors, improving the overall accuracy of the graph.
+ However, it may increase indexing time and memory usage.
+ A **smaller** value speeds up indexing and reduces resource usage,
+ but can result in a sparser and less accurate graph structure.
+ * With **static-indexes** -
+ This param can be set directly in the index definition. For example, see this [index definition](../../ai-integration/vector-search/vector-search-using-static-index.mdx#indexing-raw-text).
+ If not explicitly set, or when using **dynamic queries** -
+ the value is taken from the [Indexing.Corax.VectorSearch.DefaultNumberOfCandidatesForIndexing](../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofcandidatesforindexing) configuration key.
+
+For all parameters that can be defined at indexing time (including the ones above),
+see [Parameters defined at index definition](../../ai-integration/vector-search/vector-search-using-static-index.mdx#parameters-defined-at-index-definition).
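+
+For illustration, a static-index definition might tune these two parameters as follows. This is a sketch: the `NumberOfEdges` and `NumberOfCandidatesForIndexing` property names are assumed here - verify them against the linked index-definition examples for your client version:
+
+```csharp
+var indexDefinition = new IndexDefinition
+{
+    Name = "Movies/ByVector/Tuned",
+    Maps = new HashSet<string>
+    {
+        @"from movie in docs.Movies
+          select new { VectorFromText = CreateVector(movie.Title) }"
+    },
+    Fields = new Dictionary<string, IndexFieldOptions>
+    {
+        ["VectorFromText"] = new IndexFieldOptions
+        {
+            Vector = new VectorOptions
+            {
+                // Denser graph: better connectivity and recall,
+                // at the cost of memory and indexing time.
+                NumberOfEdges = 32,
+                // Larger candidate pool (efConstruction) evaluated per insertion.
+                NumberOfCandidatesForIndexing = 256
+            }
+        }
+    },
+    Configuration = new IndexConfiguration
+    {
+        ["Indexing.Static.SearchEngineType"] = "Corax"
+    }
+};
+
+store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+```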
+
+## Query-time parameters
+
+Once the index is built, the following query-time parameters influence the vector search - controlling how the HNSW graph is traversed and how results are selected.
+These parameters directly affect how many results are found, how similar they are to the input vector, and how they are ranked.
+
+* **Number of Candidates at query time**:
+
+ * This parameter (commonly referred to as _efSearch_) controls how many nodes in the HNSW graph are evaluated during a vector search -
+ that is, how many candidates are considered before the search stops.
+ It defines the size of the priority queue used during the search: the number of best-so-far candidates that RavenDB will track and expand as it descends through the graph.
+ * A **larger** value increases the breadth of the search, allowing the algorithm to explore a wider set of possible neighbors
+ and typically improving accuracy and the chances of retrieving all relevant results - but this comes at the cost of slower query performance.
+ A **smaller** value speeds up queries and reduces resource usage, but increases the chance of missing relevant results due to the more limited exploration.
+ * This param can be set directly in the query. For example, see this [Query example](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-raw-text).
+ If not explicitly set, the value is taken from the [Indexing.Corax.VectorSearch.DefaultNumberOfCandidatesForQuerying](../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultnumberofcandidatesforquerying) configuration key.
+
+* **Minimum Similarity**:
+
+ * This parameter defines a threshold between `0.0` and `1.0` that determines how similar a vector must be to the query in order to be included in the results.
+ * Vectors with a similarity score below this threshold are excluded from the results -
+ even if they would otherwise be among the top candidates.
+ Use this to filter out marginal matches, especially when minimum semantic relevance is important.
+ * This param can be set directly in the query. For example, see this [Query example](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#querying-raw-text).
+ If not explicitly set in the query, the value is taken from the [Indexing.Corax.VectorSearch.DefaultMinimumSimilarity](../../server/configuration/indexing-configuration.mdx#indexingcoraxvectorsearchdefaultminimumsimilarity) configuration key.
+ The default value of this configuration key is `0.0`, which means no similarity filtering is applied - all candidates found during the search are eligible to be returned,
+ regardless of how dissimilar they are from the query vector.
+
+* **Search Method**:
+
+ * You can choose between two vector search modes:
+ * **Approximate search** (default):
+ Uses the HNSW algorithm for fast, scalable search. While it doesn’t guarantee the absolute nearest vectors,
+ it is typically accurate and strongly recommended in most scenarios due to its performance.
+ * **Exact search**:
+ Performs a full comparison against all indexed vectors to guarantee the closest matches.
+ Learn more in [Using exact search](../../ai-integration/vector-search/what-affects-vector-search-results.mdx#using-exact-search) below.
+
+For all parameters that can be defined at query time, see:
+Dynamic queries - [The dynamic query parameters](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#the-dynamic-query-parameters).
+Static index queries - [Parameters used at query time](../../ai-integration/vector-search/vector-search-using-static-index.mdx#parameters-used-at-query-time).
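+
+For example, a query might widen the candidate pool and filter out marginal matches as follows. This is a sketch that reuses the `Movies_ByVectorFromPhoto` index class from the static-index article; the named parameters are the ones listed above:
+
+```csharp
+var results = session
+    .Query<Movies_ByVectorFromPhoto.IndexEntry, Movies_ByVectorFromPhoto>()
+    .VectorSearch(
+        field => field.WithField(x => x.VectorFromPhoto),
+        queryVector => queryVector.ByText("NASA", "id-of-embedding-generation-task"),
+        // Exclude results whose similarity score falls below this threshold
+        minimumSimilarity: 0.75f,
+        // efSearch: how many best-so-far candidates the traversal tracks
+        numberOfCandidates: 64)
+    .OfType<Movie>()
+    .ToList();
+```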
+
+## Using exact search
+
+* If you need precise control over results and want to avoid the approximations of HNSW,
+ you can perform an exact search instead.
+
+* Exact search performs a full scan of the vector space, comparing the query vector to every indexed vector.
+ This guarantees that the true closest matches are returned.
+
+* While exact search provides guaranteed accuracy, it is more resource-intensive and may be slower - especially for large indexes.
+ However, if the index is small, exact search can still offer reasonable performance.
+ The approximate search remains strongly recommended in most scenarios due to its performance.
+ Use exact search only when maximum precision is critical and the performance trade-off is acceptable.
+
+* Exact search can be used with both static index queries and dynamic queries.
+ For example, see [Dynamic vector search - exact search](../../ai-integration/vector-search/vector-search-using-dynamic-query.mdx#dynamic-vector-search---exact-search).
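+
+As a sketch (again reusing the index class from the static-index article), an exact search can be requested with the `isExact` parameter that the query examples in this documentation mention:
+
+```csharp
+var results = session
+    .Query<Movies_ByVectorFromPhoto.IndexEntry, Movies_ByVectorFromPhoto>()
+    .VectorSearch(
+        field => field.WithField(x => x.VectorFromPhoto),
+        queryVector => queryVector.ByText("NASA", "id-of-embedding-generation-task"),
+        // Compare the query vector against every indexed vector (full scan)
+        isExact: true)
+    .OfType<Movie>()
+    .ToList();
+```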
diff --git a/versioned_docs/version-7.1/client-api/_category_.json b/versioned_docs/version-7.1/client-api/_category_.json
new file mode 100644
index 0000000000..b5b683de90
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 1,
+ "label": "Client API"
+}
diff --git a/versioned_docs/version-7.1/client-api/_creating-document-store-csharp.mdx b/versioned_docs/version-7.1/client-api/_creating-document-store-csharp.mdx
new file mode 100644
index 0000000000..9768e15cb9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_creating-document-store-csharp.mdx
@@ -0,0 +1,125 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* **Creating a Document Store** is the _first step_ that a RavenDB client application needs to make when working with RavenDB.
+
+* We recommend that your Document Store implement the [Singleton Pattern](https://csharpindepth.com/articles/Singleton) as demonstrated in
+the example code [below](../client-api/creating-document-store.mdx#creating-a-document-store---example).
+Creating more than one Document Store may be resource intensive, and one instance is sufficient for most use cases.
+
+* In this page:
+ * [Creating a Document Store - Configuration](../client-api/creating-document-store.mdx#creating-a-document-store---configuration)
+ * [Certificate Disposal](../client-api/creating-document-store.mdx#certificate-disposal)
+ * [Creating a Document Store - Example](../client-api/creating-document-store.mdx#creating-a-document-store---example)
+
+## Creating a Document Store - Configuration
+
+The following properties can be configured when creating a new Document Store:
+
+* **Urls** (required)
+
+ * An initial URLs list of your RavenDB cluster nodes that is used when the client accesses the database for the _first_ time.
+
+ * Upon the first database access, the client will fetch the [Database Group Topology](../studio/database/settings/manage-database-group.mdx)
+  from the first server on this list that it successfully connects to. An exception is thrown if the client fails to connect to any
+  of the servers specified on this list. The URLs from the Database Group Topology will supersede this initial URLs list for any future
+ access to that database.
+
+ * **Note**: Do not create a Document Store with URLs that point to servers outside of your cluster.
+
+ * **Note**: This list is not binding. You can always modify your cluster later dynamically, add new nodes or remove existing ones as
+ necessary. Learn more in [Cluster View Operations](../studio/cluster/cluster-view.mdx#cluster-view-operations).
+
+* **[Database](../client-api/setting-up-default-database.mdx)** (optional)
+ The default database which the Client will work against.
+ A different database can be specified when creating a [Session](../client-api/session/opening-a-session.mdx) if needed.
+
+* **[Conventions](../client-api/configuration/conventions.mdx)** (optional)
+ Customize the Client behavior with a variety of options, overriding the default settings.
+
+* **[Certificate](../client-api/setting-up-authentication-and-authorization.mdx)** (optional)
+  X.509 certificate used to authenticate the client to the RavenDB server.
+
+After setting the above configurations as necessary, call `.Initialize()` to begin using the Document Store.
+
+
+The Document Store is immutable - all of the above configuration is frozen once `.Initialize()` is called.
+Create a new Document Store object if you need different default configuration values.
+
+
+## Certificate Disposal
+
+Starting with RavenDB `6.x`, disposing of a store automatically removes any X509Certificate2 certificate installed for
+it, to [prevent the accumulation of unneeded certificate files](https://snede.net/the-most-dangerous-constructor-in-net/).
+
+To **disable** the automatic disposal of certificates, please use the
+[DisposeCertificate](../client-api/configuration/conventions.mdx#disposecertificate) convention.
+
+
+
+{`// Set conventions as necessary (optional)
+Conventions =
+\{
+ // Disable the automatic disposal of certificates when the store is disposed of
+ DisposeCertificate = false
+\},
+`}
+
+
+
+
+
+## Creating a Document Store - Example
+
+This example demonstrates how to implement the singleton pattern in the initialization of a Document Store, as well as how to set initial
+default configurations.
+
+
+
+{`// The \`DocumentStoreHolder\` class holds a single Document Store instance.
+public class DocumentStoreHolder
+\{
+ // Use Lazy to initialize the document store lazily.
+ // This ensures that it is created only once - when first accessing the public \`Store\` property.
+    private static Lazy<IDocumentStore> store = new Lazy<IDocumentStore>(CreateStore);
+
+ public static IDocumentStore Store => store.Value;
+
+ private static IDocumentStore CreateStore()
+ \{
+ IDocumentStore store = new DocumentStore()
+ \{
+ // Define the cluster node URLs (required)
+ Urls = new[] \{ "http://your_RavenDB_cluster_node",
+ /*some additional nodes of this cluster*/ \},
+
+ // Set conventions as necessary (optional)
+ Conventions =
+ \{
+ MaxNumberOfRequestsPerSession = 10,
+ UseOptimisticConcurrency = true
+ \},
+
+ // Define a default database (optional)
+ Database = "your_database_name",
+
+ // Define a client certificate (optional)
+ Certificate = new X509Certificate2("C:\\\\path_to_your_pfx_file\\\\cert.pfx"),
+
+ // Initialize the Document Store
+ \}.Initialize();
+
+ return store;
+ \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_creating-document-store-java.mdx b/versioned_docs/version-7.1/client-api/_creating-document-store-java.mdx
new file mode 100644
index 0000000000..2eace8873c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_creating-document-store-java.mdx
@@ -0,0 +1,62 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To create an instance of the `DocumentStore` you need to specify a list of URL addresses that point to RavenDB server nodes.
+
+
+Do not open a `DocumentStore` using URL addresses that point to nodes outside your cluster.
+
+
+
+
+{`try (IDocumentStore store = new DocumentStore( new String[]\{ "http://localhost:8080" \}, "Northwind")) \{
+ store.initialize();
+
+
+\}
+`}
+
+
+
+This will instantiate a communication channel between your application and the local RavenDB server instance.
+
+## Initialization
+
+To be able to work on the `DocumentStore`, you will have to call the `initialize` method to get the fully initialized instance of `IDocumentStore`.
+
+
+
+The conventions are frozen after `DocumentStore` initialization, so they need to be set before `initialize` is called.
+
+
+
+## Singleton
+
+Because the document store is a heavyweight object, there should only be one instance created per application (a singleton). The document store is a thread-safe object, and its typical
+initialization looks like the following:
+
+
+
+{`public static class DocumentStoreHolder \{
+
+ private static IDocumentStore store;
+
+ static \{
+ store = new DocumentStore(new String[]\{ "http://localhost:8080" \}, "Northwind");
+ \}
+
+ public static IDocumentStore getStore() \{
+ return store;
+ \}
+\}
+`}
+
+
+
+
+If you use more than one instance of `DocumentStore` you should dispose it after use.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_creating-document-store-nodejs.mdx b/versioned_docs/version-7.1/client-api/_creating-document-store-nodejs.mdx
new file mode 100644
index 0000000000..53a6db4961
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_creating-document-store-nodejs.mdx
@@ -0,0 +1,57 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To create an instance of the `DocumentStore` you need to specify a list of URL addresses that point to RavenDB server nodes.
+
+
+
+{`new DocumentStore(urls, [database], [authOptions]);
+`}
+
+
+
+
+Do not open a `DocumentStore` using URL addresses that point to nodes outside your cluster.
+
+
+
+
+{`const store = new DocumentStore(["http://localhost:8080"], "Northwind");
+store.initialize();
+`}
+
+
+
+The above snippet is going to instantiate a communication channel between your application and the local RavenDB server instance.
+
+## Initialization
+
+A `DocumentStore` instance must be initialized before use by calling the `.initialize()` method.
+
+
+
+After `DocumentStore` initialization, the conventions are frozen - modification attempts will result in an error. Conventions need to be set *before* `.initialize()` is called.
+
+
+
+## Singleton
+
+Because the document store is a heavyweight object, there should only be one instance created per application (a singleton - simple to achieve in Node.js by wrapping it in a module). Typical initialization of a document store looks as follows:
+
+
+
+{`// documentStoreHolder.js
+const store = new DocumentStore("http://localhost:8080", "Northwind");
+store.initialize();
+export \{ store as documentStore \};
+`}
+
+
+
+
+If you use more than one instance of `DocumentStore`, you should dispose it after use by calling its `.dispose()` method.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_net-client-versions-csharp.mdx b/versioned_docs/version-7.1/client-api/_net-client-versions-csharp.mdx
new file mode 100644
index 0000000000..50a112298e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_net-client-versions-csharp.mdx
@@ -0,0 +1,16 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The .NET client is released for `netstandard2.0` and `netcoreapp2.1` targets and works on 32- and 64-bit platforms.
+
+## netstandard2.0
+
+This target allows you to create applications for **.NET Framework 4.6.1+, .NET Core 2.0+ and UWP (Universal Windows Platform) 10.1**.
+
+## netcoreapp2.1
+
+This target allows you to create applications for **.NET Core 2.1+**.
+
+
diff --git a/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-csharp.mdx b/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-csharp.mdx
new file mode 100644
index 0000000000..0bfa4f8be3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-csharp.mdx
@@ -0,0 +1,47 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* **Authentication and authorization** are based on [Client X.509 Certificates](../server/security/authorization/security-clearance-and-permissions.mdx).
+
+* When your RavenDB instance runs on HTTPS, the server has a **Server Certificate** loaded.
+Your application must use a **Client Certificate** in order to access this secure server.
+
+* Obtain a Client Certificate from your cluster admin.
+The Client Certificate is generated by the admin from the [Studio](../server/security/authentication/certificate-management.mdx).
+
+* The security clearance (authorization level) for the generated Client Certificate is set during the process of generating the
+certificate.
+
+* Pass your Client Certificate to the Document Store before initialization, as shown in the example code
+[below](../client-api/setting-up-authentication-and-authorization.mdx#example---initializing-document-store-with-a-client-certificate).
+The server will use this certificate to authenticate the client when the connection is established.
+
+
+## Example - Initializing Document Store With a Client Certificate
+
+
+
+{`// Load a X.509 certificate
+X509Certificate2 clientCertificate = new X509Certificate2("C:\\\\path_to_your_pfx_file\\\\cert.pfx");
+
+using (IDocumentStore store = new DocumentStore()
+\{
+ // Pass your certificate to the \`Certificate\` property
+ Certificate = clientCertificate,
+ Database = "your_database_name",
+ Urls = new[] \{"https://your_RavenDB_server_URL"\}
+\}.Initialize())
+\{
+ // Do your work here
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-java.mdx b/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-java.mdx
new file mode 100644
index 0000000000..6db07d9a8a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-java.mdx
@@ -0,0 +1,35 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+Authentication and authorization are based on [client X.509 certificates](../server/security/authorization/security-clearance-and-permissions.mdx).
+
+The `certificate` property allows you to pass a certificate which will be used by the RavenDB client to connect to a server.
+
+
+If your RavenDB instance is running on HTTPS, your application must use a client certificate to access the server. You can find more information [here](../server/security/overview.mdx).
+
+
+## Example
+
+
+
+{`// load certificate
+// pem file should contain both public and private key
+KeyStore clientStore = CertificateUtils.createKeystore("c:\\\\ravendb\\\\app.client.certificate.pem");
+
+try (DocumentStore store = new DocumentStore()) \{
+ store.setCertificate(clientStore);
+ store.setDatabase("Northwind");
+ store.setUrls(new String[]\{ "https://my_secured_raven" \});
+
+ store.initialize();
+
+ // do your work here
+\}
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-nodejs.mdx b/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-nodejs.mdx
new file mode 100644
index 0000000000..f0adbc80b3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_setting-up-authentication-and-authorization-nodejs.mdx
@@ -0,0 +1,36 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+Authentication and authorization are based on [client X.509 certificates](../server/security/authorization/security-clearance-and-permissions.mdx).
+
+The authentication options argument in `DocumentStore` constructor property allows you to pass a certificate which will be used by the RavenDB client to connect to a server.
+
+
+If your RavenDB server instance is served over `https`, your application must use a client certificate to access the server. You can find more information [here](../server/security/overview.mdx).
+
+
+## Example
+
+
+
+{`import \{ DocumentStore \} from "ravendb";
+import * as fs from "fs";
+
+// load certificate and prepare authentication options
+const authOptions = \{
+ certificate: fs.readFileSync("C:\\\\ravendb\\\\client-cert.pfx"),
+ type: "pfx", // or "pem"
+ password: "my passphrase"
+\};
+
+const store = new DocumentStore([ "https://my_secured_raven" ], "Northwind", authOptions);
+store.initialize();
+
+// proceed with your work here
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_setting-up-default-database-csharp.mdx b/versioned_docs/version-7.1/client-api/_setting-up-default-database-csharp.mdx
new file mode 100644
index 0000000000..6cb187a222
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_setting-up-default-database-csharp.mdx
@@ -0,0 +1,83 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+
+* A **default database** can be set in the Document Store.
+The default database is used when accessing the Document Store methods without explicitly specifying a database.
+
+* You can pass a different database when accessing the Document Store methods.
+This database will override the default database for that method action only.
+The default database value itself will not change.
+
+* When accessing the Document Store methods, an exception will be thrown if a default database is not set and no other database was
+explicitly passed.
+
+* In this page:
+ * [Example - Without a Default Database](../client-api/setting-up-default-database.mdx#example---without-a-default-database)
+ * [Example - With a Default Database](../client-api/setting-up-default-database.mdx#example---with-a-default-database)
+
+## Example - Without a Default Database
+
+
+
+{`using (IDocumentStore store = new DocumentStore
+\{
+ Urls = new[] \{ "http://your_RavenDB_server_URL" \}
+ // Default database is not set
+\}.Initialize())
+\{
+ // Specify the 'Northwind' database when opening a Session
+    using (IDocumentSession session = store.OpenSession(database: "Northwind"))
+ \{
+ // Session will operate on the 'Northwind' database
+ \}
+
+ // Specify the 'Northwind' database when sending an Operation
+ store.Maintenance.ForDatabase("Northwind").Send(new DeleteIndexOperation("NorthWindIndex"));
+\}
+`}
+
+
+
+
+
+## Example - With a Default Database
+
+The default database is defined in the Document Store's `Database` property.
+
+
+{`using (IDocumentStore store = new DocumentStore
+\{
+ Urls = new[] \{ "http://your_RavenDB_server_URL" \},
+ // Default database is set to 'Northwind'
+ Database = "Northwind"
+\}.Initialize())
+\{
+ // Using the default database
+ using (IDocumentSession northwindSession = store.OpenSession())
+ \{
+ // Session will operate on the default 'Northwind' database
+ \}
+
+ // Operation for default database
+    store.Maintenance.Send(new DeleteIndexOperation("NorthwindIndex"));
+
+ // Specify the 'AdventureWorks' database when opening a Session
+ using (IDocumentSession adventureWorksSession = store.OpenSession(database: "AdventureWorks"))
+ \{
+        // Session will operate on the specified 'AdventureWorks' database
+ \}
+
+ // Specify the 'AdventureWorks' database when sending an Operation
+ store.Maintenance.ForDatabase("AdventureWorks").Send(new DeleteIndexOperation("AdventureWorksIndex"));
+\}
+`}
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_setting-up-default-database-java.mdx b/versioned_docs/version-7.1/client-api/_setting-up-default-database-java.mdx
new file mode 100644
index 0000000000..5af6eeba84
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_setting-up-default-database-java.mdx
@@ -0,0 +1,66 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The `database` property allows you to set up a default database for a `DocumentStore`. When a default database is set, each time you access [operations](../client-api/operations/what-are-operations.mdx) or create a [session](../client-api/session/what-is-a-session-and-how-does-it-work.mdx) without explicitly passing the database they should operate on, the default database is used.
+
+## Example I
+
+
+
+{`// without specifying \`database\`
+// we will need to specify the database in each action
+// if no database is passed explicitly we will get an exception
+
+try (DocumentStore store = new DocumentStore()) \{
+ store.setUrls(new String[]\{ "http://localhost:8080" \});
+ store.initialize();
+
+ try (IDocumentSession session = store.openSession("Northwind")) \{
+ // ...
+ \}
+
+ CompactSettings compactSettings = new CompactSettings();
+ compactSettings.setDatabaseName("Northwind");
+ store.maintenance().server().send(new CompactDatabaseOperation(compactSettings));
+\}
+`}
+
+
+
+## Example II
+
+
+
+{`// when \`database\` is set to \`Northwind\`
+// created \`operations\` or opened \`sessions\`
+// will work on \`Northwind\` database by default
+// if no database is passed explicitly
+try (DocumentStore store = new DocumentStore(new String[]\{ "http://localhost:8080" \}, "Northwind")) \{
+ store.initialize();
+
+ try (IDocumentSession northwindSession = store.openSession()) \{
+ // ...
+ \}
+
+ store.maintenance().send(new DeleteIndexOperation("NorthwindIndex"));
+
+
+ try (IDocumentSession adventureWorksSession = store.openSession("AdventureWorks")) \{
+ // ...
+ \}
+
+ store.maintenance().forDatabase("AdventureWorks").send(new DeleteIndexOperation("AdventureWorksIndex"));
+\}
+`}
+
+
+
+## Remarks
+
+
+By default, the value of the `database` property in `DocumentStore` is `null`, which means that we must specify the database in any action that requires a database name.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_setting-up-default-database-nodejs.mdx b/versioned_docs/version-7.1/client-api/_setting-up-default-database-nodejs.mdx
new file mode 100644
index 0000000000..8dc32d7ea5
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_setting-up-default-database-nodejs.mdx
@@ -0,0 +1,67 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The `database` property allows you to set up a default database for a `DocumentStore`. When a default database is set, each time you access [operations](../client-api/operations/what-are-operations.mdx) or create a [session](../client-api/session/what-is-a-session-and-how-does-it-work.mdx) without explicitly passing the database they should operate on, the default database is used.
+
+## Example I
+
+
+
+{`// without specifying \`database\`
+// we will need to specify the database in each action
+// if no database is passed explicitly we will get an error
+
+const store = new DocumentStore([ "http://localhost:8080" ]);
+store.initialize();
+
+\{
+ const session = store.openSession("Northwind");
+ // ...
+\}
+
+const compactSettings = \{ databaseName: "Northwind" \};
+await store.maintenance.server.send(
+ new CompactDatabaseOperation(compactSettings));
+`}
+
+
+
+## Example II
+
+
+
+{`// when \`database\` is set to \`Northwind\`
+// created \`operations\` or opened \`sessions\`
+// will work on \`Northwind\` database by default
+// if no database is passed explicitly
+const store = new DocumentStore("http://localhost:8080", "Northwind");
+store.initialize();
+
+\{
+ const northwindSession = store.openSession();
+ // ...
+\}
+
+await store.maintenance.send(
+ new DeleteIndexOperation("NorthwindIndex"));
+
+\{
+ const adventureWorksSession = store.openSession("AdventureWorks");
+ // ...
+\}
+
+await store.maintenance.forDatabase("AdventureWorks")
+ .send(new DeleteIndexOperation("AdventureWorksIndex"));
+`}
+
+
+
+## Remarks
+
+
+By default, the value of the `database` property in `DocumentStore` is `null`, which means that we must specify the database in any action that requires a database name.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_what-is-a-document-store-csharp.mdx b/versioned_docs/version-7.1/client-api/_what-is-a-document-store-csharp.mdx
new file mode 100644
index 0000000000..b3e54156d7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_what-is-a-document-store-csharp.mdx
@@ -0,0 +1,32 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The **Document Store** is the main Client API object which establishes and manages the communication between your client application and a [RavenDB cluster](../server/clustering/overview.mdx).
+Communication is done via HTTP requests.
+
+* The Document Store holds the [Cluster Topology](../server/clustering/rachis/cluster-topology.mdx), the [Authentication Certificate](../client-api/setting-up-authentication-and-authorization.mdx),
+and any configurations & customizations that you may have applied.
+
+* Caching is built in. All requests to the server(s) and their responses are cached within the Document Store.
+
+* A single instance of the Document Store ([Singleton Pattern](https://csharpindepth.com/articles/Singleton)) should be created per cluster for the lifetime of your application (see the sketch after the list below).
+
+* The Document Store is thread-safe.
+
+* The Document Store exposes methods to perform operations such as:
+ * [Session](../client-api/session/what-is-a-session-and-how-does-it-work.mdx) - Use the Session object to perform operations on a specific database
+ * [Operations](../client-api/operations/what-are-operations.mdx) - Manage the server with a set of low level operation commands
+ * [Bulk insert](../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx) - Useful when inserting a large amount of data
+ * [Conventions](../client-api/configuration/conventions.mdx) - Customize the Client API behavior
+ * [Changes API](../client-api/changes/what-is-changes-api.mdx) - Receive messages from the server
+ * [Aggressive caching](../client-api/how-to/setup-aggressive-caching.mdx) - Configure caching behavior
+ * [Events](../client-api/session/how-to/subscribe-to-events.mdx) - Perform custom actions in response to the Session's operations
+ * [Data Subscriptions](../client-api/data-subscriptions/what-are-data-subscriptions.mdx) - Define & manage data processing on the client side
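+
+A common way to follow the Singleton Pattern mentioned above is to wrap the Document Store
+in a lazily-initialized holder class. The sketch below is illustrative only; the class name,
+URL, and database name are placeholders rather than part of the Client API:
+
+
+
+{`// A minimal sketch of a singleton Document Store holder (illustrative names)
+public static class DocumentStoreHolder
+\{
+    private static readonly Lazy<IDocumentStore> _store =
+        new Lazy<IDocumentStore>(CreateStore);
+
+    public static IDocumentStore Store => _store.Value;
+
+    private static IDocumentStore CreateStore()
+    \{
+        var store = new DocumentStore
+        \{
+            Urls = new[] \{ "http://your_RavenDB_server_URL" \},
+            Database = "Northwind"
+        \};
+        return store.Initialize();
+    \}
+\}
+`}
+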
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/_what-is-a-document-store-java.mdx b/versioned_docs/version-7.1/client-api/_what-is-a-document-store-java.mdx
new file mode 100644
index 0000000000..476a85ea28
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_what-is-a-document-store-java.mdx
@@ -0,0 +1,23 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+A document store is our main client API object which establishes and manages the connection channel between an application and a database instance.
+It acts as the connection manager and also exposes methods to perform all operations which you can run against an associated server instance.
+
+The document store object has a list of URL addresses that point to RavenDB server nodes.
+
+* `DocumentStore` acts against a remote server via HTTP requests, implementing a common `IDocumentStore` interface
+
+The document store ensures access to the following client API features:
+
+* [Session](../client-api/session/what-is-a-session-and-how-does-it-work.mdx)
+* [Operations](../client-api/operations/what-are-operations.mdx)
+* [Conventions](../client-api/configuration/conventions.mdx)
+* [Events](../client-api/session/how-to/subscribe-to-events.mdx)
+* [Bulk insert](../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx)
+* [Changes API](../client-api/changes/what-is-changes-api.mdx)
+* [Aggressive cache](../client-api/how-to/setup-aggressive-caching.mdx)
+
+
diff --git a/versioned_docs/version-7.1/client-api/_what-is-a-document-store-nodejs.mdx b/versioned_docs/version-7.1/client-api/_what-is-a-document-store-nodejs.mdx
new file mode 100644
index 0000000000..41fc04ec72
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/_what-is-a-document-store-nodejs.mdx
@@ -0,0 +1,22 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+A document store is our main client API object which establishes and manages the connection channel between an application and a database instance.
+It acts as the connection manager and also exposes methods to perform all operations which you can run against an associated server instance.
+
+The document store object has a list of URL addresses that point to RavenDB server nodes.
+
+* `DocumentStore` acts against a remote server via HTTP requests
+
+The document store ensures access to the following client API features:
+
+* [Session](../client-api/session/what-is-a-session-and-how-does-it-work.mdx)
+* [Operations](../client-api/operations/what-are-operations.mdx)
+* [Conventions](../client-api/configuration/conventions.mdx)
+* [Events](../client-api/session/how-to/subscribe-to-events.mdx)
+* [Bulk insert](../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx)
+* [Changes API](../client-api/changes/what-is-changes-api.mdx)
+
+
diff --git a/versioned_docs/version-7.1/client-api/bulk-insert/_category_.json b/versioned_docs/version-7.1/client-api/bulk-insert/_category_.json
new file mode 100644
index 0000000000..2fea752c6b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/bulk-insert/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 8,
+  "label": "Bulk Insert"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-csharp.mdx b/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-csharp.mdx
new file mode 100644
index 0000000000..d1b6becbca
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-csharp.mdx
@@ -0,0 +1,206 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* `BulkInsert` is useful when inserting a large quantity of data from the client to the server.
+* It is an optimized, time-saving approach with a few
+  [limitations](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#limitations),
+  such as the possibility of interruptions during the operation.
+
+In this page:
+
+* [Syntax](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#syntax)
+* [`BulkInsertOperation`](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#bulkinsertoperation)
+ * [Methods](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#methods)
+ * [Limitations](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#limitations)
+ * [Example](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#example)
+* [`BulkInsertOptions`](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#bulkinsertoptions)
+ * [`CompressionLevel`](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#section)
+ * [`SkipOverwriteIfUnchanged`](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#section-1)
+
+
+
+## Syntax
+
+
+
+{`BulkInsertOperation BulkInsert(string database = null, CancellationToken token = default);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **database** | `string` | The name of the database to perform the bulk operation on. If `null`, the DocumentStore `Database` will be used. |
+| **token** | `CancellationToken` | Cancellation token used to halt the worker operation. |
+
+| Return Value | |
+| ------------- | ----- |
+| `BulkInsertOperation`| Instance of `BulkInsertOperation` used for interaction. |
+
+
+{`BulkInsertOperation BulkInsert(string database, BulkInsertOptions options, CancellationToken token = default);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **database** | `string` | The name of the database to perform the bulk operation on. If `null`, the DocumentStore `Database` will be used. |
+| **options** | `BulkInsertOptions` | [Options](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#bulkinsertoptions) to configure BulkInsert. |
+| **token** | `CancellationToken` | Cancellation token used to halt the worker operation. |
+
+| Return Value | |
+| ------------- | ----- |
+| `BulkInsertOperation`| Instance of `BulkInsertOperation` used for interaction. |
+
+
+{`BulkInsertOperation BulkInsert(BulkInsertOptions options, CancellationToken token = default);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **options** | `BulkInsertOptions` | [Options](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#bulkinsertoptions) to configure BulkInsert. |
+| **token** | `CancellationToken` | Cancellation token used to halt the worker operation. |
+
+| Return Value | |
+| ------------- | ----- |
+| `BulkInsertOperation`| Instance of `BulkInsertOperation` used for interaction. |
+
+
+
+## `BulkInsertOperation`
+
+The following methods can be used when creating a bulk insert.
+
+### Methods
+
+| Signature | Description |
+| ----------| ----- |
+| **void Abort()** | Aborts the operation |
+| **void Store(object entity, IMetadataDictionary metadata = null)** | Stores the entity; the identifier is generated automatically on the client side. Optionally, metadata can be provided for the stored entity. |
+| **void Store(object entity, string id, IMetadataDictionary metadata = null)** | Stores the entity, using the `id` parameter to explicitly declare the entity identifier. Optionally, metadata can be provided for the stored entity. |
+| **Task StoreAsync(object entity, IMetadataDictionary metadata = null)** | Stores the entity asynchronously; the identifier is generated automatically on the client side. Optionally, metadata can be provided for the stored entity. |
+| **Task StoreAsync(object entity, string id, IMetadataDictionary metadata = null)** | Stores the entity asynchronously, using the `id` parameter to explicitly declare the entity identifier. Optionally, metadata can be provided for the stored entity. |
+| **void Dispose()** | Disposes of the operation |
+| **ValueTask DisposeAsync()** | Disposes of the operation asynchronously |
+
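+For example, the `Store` overloads can attach an explicit identifier and metadata to an
+inserted entity. The sketch below assumes the `MetadataAsDictionary` implementation of
+`IMetadataDictionary`; the metadata key it sets is purely illustrative:
+
+
+
+{`using (BulkInsertOperation bulkInsert = store.BulkInsert())
+\{
+    // The metadata key below is illustrative, not a reserved RavenDB key
+    var metadata = new MetadataAsDictionary
+    \{
+        ["Import-Source"] = "legacy-system"
+    \};
+
+    // Store with an explicit document ID and custom metadata
+    bulkInsert.Store(
+        new Employee \{ FirstName = "Jane", LastName = "Doe" \},
+        "employees/1-A",
+        metadata);
+\}
+`}
+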
+### Limitations
+
+* BulkInsert is designed to efficiently push large volumes of data.
+ Data is therefore streamed and **processed by the server in batches**.
+ Each batch is fully transactional, but there are no transaction guarantees between the batches
+ and the operation as a whole is non-transactional.
+ If the bulk insert operation is interrupted mid-way, some of your data might be persisted
+ on the server while some of it might not.
+ * Make sure that your logic accounts for the possibility of an interruption that would cause
+ some of your data not to persist on the server yet.
+  * If the operation was interrupted and you choose to re-insert the whole dataset in a new
+    operation, you can set
+    [SkipOverwriteIfUnchanged](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx#section-1)
+    to `true` so that the operation overwrites existing documents only if they have changed since
+    the last insertion.
+ * **If you need full transactionality**, using [session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx)
+ may be a better option.
+ Note that if `session` is used all of the data is processed in a single transaction, so the
+ server must have sufficient resources to handle the entire data set included in the transaction.
+* Bulk insert is **not thread-safe**.
+ A single bulk insert should not be accessed concurrently.
+ * Using multiple bulk inserts concurrently on the same client is supported.
+ * Usage in an async context is also supported.
+
+### Example
+
+#### Create bulk insert
+
+Here we create a bulk insert operation and insert a million documents of type `Employee`:
+
+
+
+{`using (BulkInsertOperation bulkInsert = store.BulkInsert())
+{
+ for (int i = 0; i < 1000 * 1000; i++)
+ {
+ bulkInsert.Store(new Employee
+ {
+ FirstName = "FirstName #" + i,
+ LastName = "LastName #" + i
+ });
+ }
+}
+`}
+
+
+
+
+{`BulkInsertOperation bulkInsert = null;
+try
+{
+ bulkInsert = store.BulkInsert();
+ for (int i = 0; i < 1000 * 1000; i++)
+ {
+ await bulkInsert.StoreAsync(new Employee
+ {
+ FirstName = "FirstName #" + i,
+ LastName = "LastName #" + i
+ });
+ }
+}
+finally
+{
+ if (bulkInsert != null)
+ {
+ await bulkInsert.DisposeAsync().ConfigureAwait(false);
+ }
+}
+`}
+
+
+
+
+
+
+## `BulkInsertOptions`
+
+The following options can be configured for BulkInsert.
+
+#### `CompressionLevel`:
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **Optimal** | `string` | Compression level to be used when compressing static files. |
+| **Fastest** (Default)| `string` | Compression level to be used when compressing HTTP responses with `GZip` or `Deflate`. |
+| **NoCompression** | `string` | Does not compress. |
+
+
+For RavenDB versions up to `6.2`, bulk-insert compression is Disabled (`NoCompression`) by default.
+For RavenDB versions from `7.0` on, bulk-insert compression is Enabled (set to `Fastest`) by default.
+
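+
+Compression is configured through `BulkInsertOptions`. A minimal sketch, assuming the
+`CompressionLevel` values listed above:
+
+
+
+{`using (var bulk = store.BulkInsert(new BulkInsertOptions
+\{
+    // Disable compression for this bulk insert
+    CompressionLevel = CompressionLevel.NoCompression
+\}))
+\{
+    // insert documents here
+\}
+`}
+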
+
+#### `SkipOverwriteIfUnchanged`:
+
+Use this option to avoid overwriting documents when the inserted document is identical to the existing one.
+
+Enabling this flag can spare the server many operations that are triggered by document changes,
+such as re-indexing and updates to subscriptions or ETL tasks.
+There is a slight potential cost in the additional comparison that has to be made between
+the existing documents and the ones being inserted.
+
+
+
+{`using (var bulk = store.BulkInsert(new BulkInsertOptions
+\{
+    SkipOverwriteIfUnchanged = true
+\}))
+\{
+    // documents stored here overwrite existing ones only if they changed
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-java.mdx b/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-java.mdx
new file mode 100644
index 0000000000..112d1b54b9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-java.mdx
@@ -0,0 +1,69 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+One of the features that is particularly useful when inserting a large amount of data is `bulk inserting`.
+This is an optimized, time-saving approach with a few drawbacks that are described later.
+
+## Syntax
+
+
+
+{`BulkInsertOperation bulkInsert();
+
+BulkInsertOperation bulkInsert(String database);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **database** | `String` | The name of the database to perform the bulk operation on. If `null`, the DocumentStore `Database` will be used. |
+
+| Return Value | |
+| ------------- | ----- |
+| `BulkInsertOperation`| Instance of BulkInsertOperation used for interaction. |
+
+## `BulkInsertOperation`
+
+### Methods
+
+| Signature | Description |
+| ----------| ----- |
+| **void abort()** | Aborts the operation |
+| **void store(Object entity, IMetadataDictionary metadata = null)** | Stores the entity; the identifier is generated automatically on the client side. Optionally, metadata can be provided for the stored entity. |
+| **void store(Object entity, String id, IMetadataDictionary metadata = null)** | Stores the entity, using the `id` parameter to explicitly declare the entity identifier. Optionally, metadata can be provided for the stored entity. |
+| **void close()** | Closes the operation |
+
+## Limitations
+
+There are a couple of limitations to the API:
+
+* The bulk insert operation is broken into batches; each batch is handled in its own transaction,
+  so the operation as a whole is not treated as a single transaction.
+* Bulk insert is not thread-safe; a single bulk insert should not be accessed concurrently.
+  Using multiple bulk inserts concurrently on the same client is supported, as is usage
+  in an async context.
+
+## Example
+
+### Create bulk insert
+
+Here we create a bulk insert operation and insert a million documents of type `Employee`:
+
+
+
+{`try (BulkInsertOperation bulkInsert = store.bulkInsert()) \{
+ for (int i = 0; i < 1_000_000; i++) \{
+ Employee employee = new Employee();
+ employee.setFirstName("FirstName #" + i);
+ employee.setLastName("LastName #" + i);
+ bulkInsert.store(employee);
+ \}
+\}
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-nodejs.mdx b/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-nodejs.mdx
new file mode 100644
index 0000000000..aed647b7e9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/bulk-insert/_how-to-work-with-bulk-insert-operation-nodejs.mdx
@@ -0,0 +1,65 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+One of the features that is particularly useful when inserting a large amount of data is `bulk inserting`.
+This is an optimized, time-saving approach with a few drawbacks that are described later.
+
+## Syntax
+
+
+
+{`documentStore.bulkInsert([database]);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **database** | `string` | The name of the database to perform the bulk operation on. If `null`, the DocumentStore `Database` will be used. |
+
+| Return Value | |
+| ------------- | ----- |
+| `BulkInsertOperation` | Instance of `BulkInsertOperation` used for interaction. |
+
+## `BulkInsertOperation`
+
+### Methods
+
+| Signature | Description |
+| ----------| ----- |
+| **async abort()** | Aborts the bulk insert operation. Returns a `Promise`. |
+| **async store(entity, [metadata])** | Stores the entity; the identifier is generated automatically on the client side. Optionally, metadata can be provided for the stored entity. Returns a `Promise`. |
+| **async store(entity, id, [metadata])** | Stores the entity, using the `id` parameter to explicitly declare the entity identifier. Optionally, metadata can be provided for the stored entity. Returns a `Promise`. |
+| **async finish()** | Finishes the bulk insert and flushes everything to the server. Returns a `Promise`. |
+
+## Limitations
+
+There are a couple of limitations to the API:
+
+* The bulk insert operation is broken into batches; each batch is handled in its own transaction,
+  so the operation as a whole is not treated as a single transaction.
+
+## Example
+
+### Create bulk insert
+
+Here we create a bulk insert operation and insert a million documents of type `Employee`:
+
+
+
+{`\{
+ const bulkInsert = documentStore.bulkInsert();
+ for (let i = 0; i < 1000000; i++) \{
+ const employee = new Employee("FirstName #" + i, "LastName #" + i);
+ await bulkInsert.store(employee);
+ \}
+
+ await bulkInsert.finish();
+\}
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx b/versioned_docs/version-7.1/client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx
new file mode 100644
index 0000000000..713efc57b9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx
@@ -0,0 +1,44 @@
+---
+title: "Bulk Insert: How to Work With Bulk Insert Operation"
+hide_table_of_contents: true
+sidebar_label: How to Work With Bulk Insert Operation
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToWorkWithBulkInsertOperationCsharp from './_how-to-work-with-bulk-insert-operation-csharp.mdx';
+import HowToWorkWithBulkInsertOperationJava from './_how-to-work-with-bulk-insert-operation-java.mdx';
+import HowToWorkWithBulkInsertOperationNodejs from './_how-to-work-with-bulk-insert-operation-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/changes/_category_.json b/versioned_docs/version-7.1/client-api/changes/_category_.json
new file mode 100644
index 0000000000..6519a53760
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 10,
+  "label": "Changes API"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-counter-changes-csharp.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-counter-changes-csharp.mdx
new file mode 100644
index 0000000000..8995f849e8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-counter-changes-csharp.mdx
@@ -0,0 +1,216 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to counter changes:
+
+- [ForCounter](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcounter)
+- [ForCounterOfDocument](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcounterofdocument)
+- [ForCountersOfDocument](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcountersofdocument)
+- [ForAllCounters](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forallcounters)
+
+## ForCounter
+
+Counter changes can be observed using the `ForCounter` method. This subscribes to changes from all counters with a given name, regardless of the document in which the counter was changed.
+
+### Syntax
+
+
+
+{`IChangesObservable<CounterChange> ForCounter(string counterName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **counterName** | string | Name of a counter to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[CounterChange](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchange)> | Observable that allows adding subscriptions to counter notifications. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForCounter("Likes")
+ .Subscribe(
+ change =>
+ \{
+ switch (change.Type)
+ \{
+ case CounterChangeTypes.Increment:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## ForCounterOfDocument
+
+Specific counter changes of a given document can be observed using the `ForCounterOfDocument` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<CounterChange> ForCounterOfDocument(string documentId, string counterName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **documentId** | string | ID of a document to subscribe to. |
+| **counterName** | string | Name of a counter to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[CounterChange](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchange)> | Observable that allows adding subscriptions to counter notifications. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForCounterOfDocument("companies/1-A", "Likes")
+ .Subscribe(
+ change =>
+ \{
+ switch (change.Type)
+ \{
+ case CounterChangeTypes.Increment:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## ForCountersOfDocument
+
+Counter changes of a specified document can be observed using the `ForCountersOfDocument` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<CounterChange> ForCountersOfDocument(string documentId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **documentId** | string | ID of a document to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[CounterChange](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchange)> | Observable that allows adding subscriptions to counter notifications. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForCountersOfDocument("companies/1-A")
+ .Subscribe(
+ change =>
+ \{
+ switch (change.Type)
+ \{
+ case CounterChangeTypes.Increment:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## ForAllCounters
+
+Changes for all counters can be observed using the `ForAllCounters` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<CounterChange> ForAllCounters();
+`}
+
+
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[CounterChange](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchange)> | Observable that allows adding subscriptions to counter notifications. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForAllCounters()
+ .Subscribe(
+ change =>
+ \{
+ switch (change.Type)
+ \{
+ case CounterChangeTypes.Increment:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## CounterChange
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Type** | [CounterChangeTypes](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchangetypes) | Counter change type enum |
+| **Name** | string | Counter name |
+| **Value** | long | Counter value after the change |
+| **DocumentId** | string | Counter document identifier |
+| **ChangeVector** | string | Counter's ChangeVector|
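+
+For example, a subscription can read these properties directly from each notification.
+A minimal sketch:
+
+
+
+{`IDisposable subscription = store
+    .Changes()
+    .ForCounter("Likes")
+    .Subscribe(
+        change =>
+        \{
+            // Name, Value and DocumentId come from the CounterChange notification
+            Console.WriteLine("\{0\}: counter '\{1\}' on \{2\} is now \{3\}",
+                change.Type, change.Name, change.DocumentId, change.Value);
+        \});
+`}
+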
+
+
+
+## CounterChangeTypes
+
+| Name | Value |
+| ---- | ----- |
+| **None** | `0` |
+| **Put** | `1` |
+| **Delete** | `2` |
+| **Increment** | `4` |
+
+
+
+## Remarks
+
+
+To get more method overloads, especially ones supporting **delegates**, please add the
+[System.Reactive.Core](https://www.nuget.org/packages/System.Reactive.Core/) package to your project.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-counter-changes-java.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-counter-changes-java.mdx
new file mode 100644
index 0000000000..aa6cd872e6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-counter-changes-java.mdx
@@ -0,0 +1,197 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to counter changes:
+
+- [ForCounter](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcounter)
+- [ForCounterOfDocument](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcounterofdocument)
+- [ForCountersOfDocument](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcountersofdocument)
+- [ForAllCounters](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forallcounters)
+
+## ForCounter
+
+Counter changes can be observed using the `forCounter` method. This subscribes to changes from all counters with a given name, regardless of the document in which the counter was changed.
+
+### Syntax
+
+
+
+{`IChangesObservable<CounterChange> forCounter(String counterName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **counterName** | String | Name of a counter to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[CounterChange](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchange)> | Observable that allows adding subscriptions to counter notifications. |
+
+### Example
+
+
+
+{`store
+ .changes()
+ .forCounter("likes")
+ .subscribe(Observers.create(change -> \{
+ switch (change.getType()) \{
+ case INCREMENT:
+ // do something ...
+ break;
+ \}
+ \}));
+`}
+
+
+
+
+
+## ForCounterOfDocument
+
+Specific counter changes of a given document can be observed using the `forCounterOfDocument` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<CounterChange> forCounterOfDocument(String documentId, String counterName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **documentId** | String | ID of a document to subscribe to. |
+| **counterName** | String | Name of a counter to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[CounterChange](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchange)> | Observable that allows adding subscriptions to counter notifications. |
+
+### Example
+
+
+
+{`store
+ .changes()
+ .forCounterOfDocument("companies/1-A", "likes")
+ .subscribe(Observers.create(change -> \{
+ switch (change.getType()) \{
+ case INCREMENT:
+ // do something
+ break;
+ \}
+ \}));
+`}
+
+
+
+
+
+## ForCountersOfDocument
+
+Counter changes of a specified document can be observed using the `forCountersOfDocument` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<CounterChange> forCountersOfDocument(String documentId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **documentId** | String | ID of a document to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[CounterChange](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchange)> | Observable that allows adding subscriptions to counter notifications. |
+
+### Example
+
+
+
+{`store
+ .changes()
+ .forCountersOfDocument("companies/1-A")
+ .subscribe(Observers.create(change -> \{
+ switch (change.getType()) \{
+ case INCREMENT:
+ // do something ...
+ break;
+ \}
+ \}));
+`}
+
+
+
+
+
+## ForAllCounters
+
+Changes for all counters can be observed using the `forAllCounters` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<CounterChange> forAllCounters();
+`}
+
+
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[CounterChange](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchange)> | Observable that allows adding subscriptions to counter notifications. |
+
+### Example
+
+
+
+{`store
+ .changes()
+ .forAllCounters()
+ .subscribe(Observers.create(change -> \{
+ switch (change.getType()) \{
+ case INCREMENT:
+ // do something ...
+ break;
+ \}
+ \}));
+`}
+
+
+
+
+
+## CounterChange
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Type** | [CounterChangeTypes](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#counterchangetypes) | Counter change type enum |
+| **Name** | String | Counter name |
+| **Value** | Long | Counter value after the change |
+| **DocumentId** | String | Counter document identifier |
+| **ChangeVector** | String | Counter's ChangeVector|
+
+
+
+## CounterChangeTypes
+
+| Name | Value |
+| ---- | ----- |
+| **NONE** | `0` |
+| **PUT** | `1` |
+| **DELETE** | `2` |
+| **INCREMENT** | `4` |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-csharp.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-csharp.mdx
new file mode 100644
index 0000000000..de87eb28f3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-csharp.mdx
@@ -0,0 +1,215 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to document changes:
+
+- [ForDocument](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocument)
+- [ForDocumentsInCollection](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsincollection)
+- [ForDocumentsStartingWith](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsstartingwith)
+- [ForAllDocuments](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#foralldocuments)
+
+## ForDocument
+
+Single document changes can be observed using the `ForDocument` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<DocumentChange> ForDocument(string docId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | ID of a document for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for the given document. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForDocument("employees/1")
+ .Subscribe(
+ change =>
+ \{
+ switch (change.Type)
+ \{
+ case DocumentChangeTypes.Put:
+ // do something
+ break;
+ case DocumentChangeTypes.Delete:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## ForDocumentsInCollection
+
+To observe all document changes in a particular collection, use the `ForDocumentsInCollection` method. This method filters documents by the `@collection` metadata property value.
+
+### Syntax
+
+
+
+{`IChangesObservable<DocumentChange> ForDocumentsInCollection(string collectionName);
+
+IChangesObservable<DocumentChange> ForDocumentsInCollection<TEntity>();
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **collectionName** | string | Name of document collection for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for the given document collection name. |
+
+
+The overload with the `TEntity` type uses `Conventions.GetCollectionName` to get the collection name.
+
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForDocumentsInCollection()
+ .Subscribe(change => Console.WriteLine("\{0\} on document \{1\}", change.Type, change.Id));
+`}
+
+
+
+or
+
+
+
+{`string collectionName = store.Conventions.FindCollectionName(typeof(Employee));
+IDisposable subscription = store
+ .Changes()
+ .ForDocumentsInCollection(collectionName)
+ .Subscribe(change => Console.WriteLine("\{0\} on document \{1\}", change.Type, change.Id));
+`}
+
+
+
+
+
+## ForDocumentsStartingWith
+
+To observe changes for all documents whose ID starts with a given prefix, use the `ForDocumentsStartingWith` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<DocumentChange> ForDocumentsStartingWith(string docIdPrefix);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docIdPrefix** | string | Document ID prefix for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for the given document ID prefix. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForDocumentsStartingWith("employees/1") // employees/1, employees/10, employees/11, etc.
+ .Subscribe(change => Console.WriteLine("\{0\} on document \{1\}", change.Type, change.Id));
+`}
+
+
+
+
+
+## ForAllDocuments
+
+To observe all document changes, use the `ForAllDocuments` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<DocumentChange> ForAllDocuments();
+`}
+
+
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for all documents. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForAllDocuments() // employees/1, orders/1, customers/1, etc.
+ .Subscribe(change => Console.WriteLine("\{0\} on document \{1\}", change.Type, change.Id));
+`}
+
+
+
+
+
+## DocumentChange
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Type** | [DocumentChangeTypes](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchangetypes) | Document change type enum |
+| **Id** | string | Document identifier |
+| **CollectionName** | string | Document's collection name |
+| **TypeName** | string | Type name |
+| **ChangeVector** | string | Document's ChangeVector|
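+
+For example, a subscription can log these properties for each notification. A minimal sketch:
+
+
+
+{`IDisposable subscription = store
+    .Changes()
+    .ForAllDocuments()
+    .Subscribe(
+        change =>
+        \{
+            // Id, CollectionName and ChangeVector come from the DocumentChange notification
+            Console.WriteLine("\{0\} on \{1\} (collection: \{2\}, change vector: \{3\})",
+                change.Type, change.Id, change.CollectionName, change.ChangeVector);
+        \});
+`}
+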
+
+
+
+## DocumentChangeTypes
+
+| Name | Value |
+| ---- | ----- |
+| **None** | `0` |
+| **Put** | `1` |
+| **Delete** | `2` |
+| **BulkInsertStarted** | `4` |
+| **BulkInsertEnded** | `8` |
+| **BulkInsertError** | `16` |
+| **DeleteOnTombstoneReplication** | `32` |
+| **Conflict** | `64` |
+| **Common** | `Put \| Delete` |
+
+
+
+## Remarks
+
+
+To get more method overloads, especially ones supporting **delegates**, please add the
+[System.Reactive.Core](https://www.nuget.org/packages/System.Reactive.Core/) package to your project.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-java.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-java.mdx
new file mode 100644
index 0000000000..891331315d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-java.mdx
@@ -0,0 +1,212 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to document changes:
+
+- [forDocument](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocument)
+- [forDocumentsInCollection](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsincollection)
+- [forDocumentsStartingWith](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsstartingwith)
+- [forAllDocuments](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#foralldocuments)
+
+## forDocument
+
+Single document changes can be observed using the `forDocument` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<DocumentChange> forDocument(String docId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | String | ID of a document for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for the given document. |
+
+### Example
+
+
+
+{`CleanCloseable subscription = store.changes()
+ .forDocument("employees/1")
+ .subscribe(Observers.create(change -> \{
+ switch (change.getType()) \{
+ case PUT:
+ // do something
+ break;
+ case DELETE:
+ // do something
+ break;
+ \}
+ \}));
+`}
+
+
+
+
+
+## forDocumentsInCollection
+
+To observe all document changes in a particular collection, use the `forDocumentsInCollection` method. This method filters documents by the `@collection` metadata property value.
+
+### Syntax
+
+
+
+{`IChangesObservable<DocumentChange> forDocumentsInCollection(String collectionName);
+
+IChangesObservable<DocumentChange> forDocumentsInCollection(Class<?> clazz);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **collectionName** | String | Name of document collection for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for the given document collection name. |
+
+
+The overload with the entity class uses `conventions.getCollectionName` to get the collection name.
+
+
+### Example
+
+
+
+{`CleanCloseable subscription = store
+ .changes()
+ .forDocumentsInCollection(Employee.class)
+ .subscribe(Observers.create(change -> \{
+ System.out.println(change.getType() + " on document " + change.getId());
+ \}));
+`}
+
+
+
+or
+
+
+
+{`String collectionName = store.getConventions().getFindCollectionName().apply(Employee.class);
+CleanCloseable subscription = store
+ .changes()
+ .forDocumentsInCollection(collectionName)
+ .subscribe(Observers.create(change -> \{
+ System.out.println(change.getType() + " on document " + change.getId());
+ \}));
+`}
+
+
+
+
+
+## forDocumentsStartingWith
+
+To observe changes for all documents whose ID starts with a given prefix, use the `forDocumentsStartingWith` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<DocumentChange> forDocumentsStartingWith(String docIdPrefix);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docIdPrefix** | String | Document ID prefix for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for the given document ID prefix. |
+
+### Example
+
+
+
+{`CleanCloseable subscription = store
+ .changes()
+ .forDocumentsStartingWith("employees/1") // employees/1, employees/10, employees/11, etc.
+ .subscribe(Observers.create(change -> \{
+ System.out.println(change.getType() + " on document " + change.getId());
+ \}));
+`}
+
+
+
+
+
+## forAllDocuments
+
+To observe all document changes, use the `forAllDocuments` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<DocumentChange> forAllDocuments();
+`}
+
+
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for all documents. |
+
+### Example
+
+
+
+{`CleanCloseable subscription = store
+ .changes()
+ .forAllDocuments()
+ .subscribe(Observers.create(change -> \{
+ System.out.println(change.getType() + " on document " + change.getId());
+ \}));
+`}
+
+
+
+
+
+## DocumentChange
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Type** | [DocumentChangeTypes](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchangetypes) | Document change type enum |
+| **Id** | String | Document identifier |
+| **CollectionName** | String | Document's collection name |
+| **TypeName** | String | Type name |
+| **ChangeVector** | String | Document's ChangeVector|
+
+
+
+## DocumentChangeTypes
+
+| Name |
+| ---- |
+| **NONE** |
+| **PUT** |
+| **DELETE** |
+| **BULK_INSERT_STARTED** |
+| **BULK_INSERT_ENDED** |
+| **BULK_INSERT_ERROR** |
+| **DELETE_ON_TOMBSTONE_REPLICATION** |
+| **CONFLICT** |
+| **COMMON** |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-nodejs.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-nodejs.mdx
new file mode 100644
index 0000000000..27cb7f7883
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-document-changes-nodejs.mdx
@@ -0,0 +1,207 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to document changes:
+
+- [forDocument()](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocument)
+- [forDocumentsInCollection()](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsincollection)
+- [forDocumentsStartingWith()](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsstartingwith)
+- [forAllDocuments()](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#foralldocuments)
+
+## forDocument
+
+Single document changes can be observed using the `forDocument()` method.
+
+### Syntax
+
+
+
+{`store.changes().forDocument(docId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | ID of a document for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding listeners for events for the given document. |
+
+### Example
+
+
+
+{`store.changes().forDocument("employees/1")
+ .on("error", err => \{
+ //handle error
+ \})
+ .on("data", change => \{
+ switch (change.type) \{
+ case "Put":
+ // do something
+ break;
+ case "Delete":
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## forDocumentsInCollection
+
+To observe all document changes in a particular collection, use the `forDocumentsInCollection()` method. This method filters documents by the `@collection` metadata property value.
+
+### Syntax
+
+
+
+{`store.changes().forDocumentsInCollection(collectionName);
+store.changes().forDocumentsInCollection(clazz);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **collectionName** | string | Name of document collection for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for the given document collection name. |
+
+
+The overload with the entity type uses `conventions.getCollectionNameForType()` to get the collection name.
+
+
+### Example
+
+
+
+{`store.changes().forDocumentsInCollection(Employee)
+ .on("data", change => \{
+ console.log(change.type + " on document " + change.id);
+ \});
+`}
+
+
+
+or
+
+
+
+{`const collectionName = store.conventions.getCollectionNameForType(Employee);
+store.changes()
+ .forDocumentsInCollection(collectionName)
+ .on("data", change => \{
+ console.log(change.type + " on document " + change.id);
+ \});
+`}
+
+
+
+
+
+## forDocumentsStartingWith
+
+To observe changes for all documents whose ID starts with a given prefix, use the `forDocumentsStartingWith()` method.
+
+### Syntax
+
+
+
+{`store.changes().forDocumentsStartingWith(docIdPrefix);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docIdPrefix** | string | Document ID prefix for which notifications will be processed. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for the given document ID prefix. |
+
+### Example
+
+
+
+{`store.changes()
+ .forDocumentsStartingWith("employees/1") // employees/1, employees/10, employees/11, etc.
+ .on("data", change => \{
+ console.log(change.type + " on document " + change.id);
+ \});
+`}
+
+
+
+
+
+## forAllDocuments
+
+To observe all document changes, use the `forAllDocuments()` method.
+
+### Syntax
+
+
+
+{`store.changes().forAllDocuments();
+`}
+
+
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)> | Observable that allows adding subscriptions to notifications for all documents. |
+
+### Example
+
+
+
+{`store.changes().forAllDocuments()
+ .on("data", change => \{
+ console.log(change.type + " on document " + change.id);
+ \});
+`}
+
+
+
+
+
+## DocumentChange
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **type** | [DocumentChangeTypes](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchangetypes) | Document change type enum |
+| **id** | string | Document identifier |
+| **collectionName** | string | Document's collection name |
+| **typeName** | string | Type name |
+| **changeVector** | string | Document's ChangeVector|
+
+
+
+## DocumentChangeTypes
+
+| Name |
+| ---- |
+| **None** |
+| **Put** |
+| **Delete** |
+| **BulkInsertStarted** |
+| **BulkInsertEnded** |
+| **BulkInsertError** |
+| **DeleteOnTombstoneReplication** |
+| **Conflict** |
+| **Common** |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-csharp.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-csharp.mdx
new file mode 100644
index 0000000000..f7c65184ce
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-csharp.mdx
@@ -0,0 +1,159 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to index changes:
+
+- [ForIndex](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forindex)
+- [ForAllIndexes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forallindexes)
+
+## ForIndex
+
+Index changes for one index can be observed using the `ForIndex` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<IndexChange> ForIndex(string indexName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | string | Name of an index for which notifications will be processed. |
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[IndexChange](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchange)> | Observable that allows adding subscriptions to notifications for the index with the given name. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+    .Changes()
+    .ForIndex("Orders/All")
+    .Subscribe(
+        change =>
+        \{
+            switch (change.Type)
+            \{
+                case IndexChangeTypes.None:
+                    // do something
+                    break;
+                case IndexChangeTypes.BatchCompleted:
+                    // do something
+                    break;
+                case IndexChangeTypes.IndexAdded:
+                    // do something
+                    break;
+                case IndexChangeTypes.IndexRemoved:
+                    // do something
+                    break;
+                case IndexChangeTypes.IndexDemotedToIdle:
+                    // do something
+                    break;
+                case IndexChangeTypes.IndexPromotedFromIdle:
+                    // do something
+                    break;
+                case IndexChangeTypes.IndexDemotedToDisabled:
+                    // do something
+                    break;
+                case IndexChangeTypes.IndexMarkedAsErrored:
+                    // do something
+                    break;
+                case IndexChangeTypes.SideBySideReplace:
+                    // do something
+                    break;
+                case IndexChangeTypes.IndexPaused:
+                    // do something
+                    break;
+                case IndexChangeTypes.LockModeChanged:
+                    // do something
+                    break;
+                case IndexChangeTypes.PriorityChanged:
+                    // do something
+                    break;
+                default:
+                    throw new ArgumentOutOfRangeException();
+            \}
+        \});
+`}
+
+
+
+
+
+## ForAllIndexes
+
+Index changes for all indexes can be observed using the `ForAllIndexes` method.
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[IndexChange](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchange)> | Observable that allows adding subscriptions to notifications for all indexes. |
+
+### Syntax
+
+
+
+{`IChangesObservable<IndexChange> ForAllIndexes();
+`}
+
+
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForAllIndexes()
+ .Subscribe(change => Console.WriteLine("\{0\} on index \{1\}", change.Type, change.Name));
+`}
+
+
+
+
+
+## IndexChange
+
+### Properties
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Type** | [IndexChangeTypes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchangetypes) | Change type |
+| **Name** | string | Index name |
+| **Etag** | long? | Index Etag |
+
+
+
+## IndexChangeTypes
+
+| Name | Value |
+| ---- | ----- |
+| **None** | `0` |
+| **BatchCompleted** | `1` |
+| **IndexAdded** | `8` |
+| **IndexRemoved** | `16` |
+| **IndexDemotedToIdle** | `32` |
+| **IndexPromotedFromIdle** | `64` |
+| **IndexDemotedToDisabled** | `256` |
+| **IndexMarkedAsErrored** | `512` |
+| **SideBySideReplace** | `1024` |
+| **IndexPaused** | `4096` |
+| **LockModeChanged** | `8192` |
+| **PriorityChanged** | `16384` |
+
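+The values above are powers of two; presumably (an assumption, not stated in the original text) they can be combined and tested as bit flags. A minimal sketch:
+
+{`// A hedged sketch: combine IndexChangeTypes values as bit flags
+// and test an incoming change type against the combination.
+IndexChangeTypes watched =
+    IndexChangeTypes.IndexAdded | IndexChangeTypes.IndexRemoved;
+
+IndexChangeTypes incoming = IndexChangeTypes.IndexAdded;
+bool isWatched = (incoming & watched) != 0; // true
+`}
+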
+
+
+## Remarks
+
+
+To get more method overloads, especially ones supporting **delegates**, please add the
+[System.Reactive.Core](https://www.nuget.org/packages/System.Reactive.Core/) package to your project.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-java.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-java.mdx
new file mode 100644
index 0000000000..caa4af0fb7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-java.mdx
@@ -0,0 +1,155 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to index changes:
+
+- [forIndex](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forindex)
+- [forAllIndexes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forallindexes)
+
+## forIndex
+
+Index changes for one index can be observed using the `forIndex` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<IndexChange> forIndex(String indexName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | String | Name of an index for which notifications will be processed. |
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[IndexChange](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchange)> | Observable that allows you to add subscriptions to notifications for an index with a given name. |
+
+### Example
+
+
+
+{`CleanCloseable subscription = store
+ .changes()
+ .forIndex("Orders/All")
+ .subscribe(Observers.create(change -> \{
+ switch (change.getType()) \{
+ case NONE:
+ // do something
+ break;
+ case BATCH_COMPLETED:
+ // do something
+ break;
+ case INDEX_ADDED:
+ // do something
+ break;
+ case INDEX_REMOVED:
+ // do something
+ break;
+ case INDEX_DEMOTED_TO_IDLE:
+ // do something
+ break;
+ case INDEX_PROMOTED_FROM_IDLE:
+ // do something
+ break;
+ case INDEX_DEMOTED_TO_DISABLED:
+ // do something
+ break;
+ case INDEX_MARKED_AS_ERRORED:
+ // do something
+ break;
+ case SIDE_BY_SIDE_REPLACE:
+ // do something
+ break;
+ case RENAMED:
+ // do something
+ break;
+ case INDEX_PAUSED:
+ // do something
+ break;
+ case LOCK_MODE_CHANGED:
+ // do something
+ break;
+ case PRIORITY_CHANGED:
+ // do something
+ break;
+ default:
+ throw new IllegalArgumentException();
+ \}
+ \}));
+`}
+
+
+
+
+
+## forAllIndexes
+
+Index changes for all indexes can be observed using the `forAllIndexes` method.
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[IndexChange](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchange)> | Observable that allows you to add subscriptions to notifications for all indexes. |
+
+### Syntax
+
+
+
+{`IChangesObservable<IndexChange> forAllIndexes();
+`}
+
+
+
+### Example
+
+
+
+{`CleanCloseable subscription = store
+ .changes()
+ .forAllIndexes()
+ .subscribe(Observers.create(change -> \{
+ System.out.println(change.getType() + " on index " + change.getName());
+ \}));
+`}
+
+
+
+
+
+## IndexChange
+
+### Properties
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Type** | [IndexChangeTypes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchangetypes) | Change type |
+| **Name** | String | Index name |
+| **Etag** | Long | Index Etag |
+
+
+
+## IndexChangeTypes
+
+| Name |
+| ---- |
+| **NONE** |
+| **BATCH_COMPLETED** |
+| **INDEX_ADDED** |
+| **INDEX_REMOVED** |
+| **INDEX_DEMOTED_TO_IDLE** |
+| **INDEX_PROMOTED_FROM_IDLE** |
+| **INDEX_DEMOTED_TO_DISABLED** |
+| **INDEX_MARKED_AS_ERRORED** |
+| **SIDE_BY_SIDE_REPLACE** |
+| **RENAMED** |
+| **INDEX_PAUSED** |
+| **LOCK_MODE_CHANGED** |
+| **PRIORITY_CHANGED** |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-nodejs.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-nodejs.mdx
new file mode 100644
index 0000000000..97601e4904
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-index-changes-nodejs.mdx
@@ -0,0 +1,151 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to index changes:
+
+- [forIndex()](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forindex)
+- [forAllIndexes()](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forallindexes)
+
+## forIndex
+
+Index changes for one index can be observed using the `forIndex()` method.
+
+### Syntax
+
+
+
+{`store.changes().forIndex(indexName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | string | Name of an index for which notifications will be processed. |
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[IndexChange](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchange)> | Observable that allows you to add subscriptions to notifications for an index with a given name. |
+
+### Example
+
+
+
+{`store.changes().forIndex("Orders/All")
+ .on("data", change => \{
+ switch (change.type) \{
+ case "None":
+ // do something
+ break;
+ case "BatchCompleted":
+ // do something
+ break;
+ case "IndexAdded":
+ // do something
+ break;
+ case "IndexRemoved":
+ // do something
+ break;
+ case "IndexDemotedToIdle":
+ // do something
+ break;
+ case "IndexPromotedFromIdle":
+ // do something
+ break;
+ case "IndexDemotedToDisabled":
+ // do something
+ break;
+ case "IndexMarkedAsErrored":
+ // do something
+ break;
+ case "SideBySideReplace":
+ // do something
+ break;
+ case "Renamed":
+ // do something
+ break;
+ case "IndexPaused":
+ // do something
+ break;
+ case "LockModeChanged":
+ // do something
+ break;
+ case "PriorityChanged":
+ // do something
+ break;
+ default:
+ throw new Error("Not supported.");
+ \}
+ \});
+`}
+
+
+
+
+
+## forAllIndexes
+
+Index changes for all indexes can be observed using the `forAllIndexes()` method.
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[IndexChange](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchange)> | Observable that allows you to add subscriptions to notifications for all indexes. |
+
+### Syntax
+
+
+
+{`store.changes().forAllIndexes();
+`}
+
+
+
+### Example
+
+
+
+{`store.changes().forAllIndexes()
+ .on("data", change => \{
+ console.log(change.type + " on index " + change.name);
+ \});
+`}
+
+
+
+
+
+## IndexChange
+
+### Properties
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **type** | [IndexChangeTypes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#indexchangetypes) | Change type |
+| **name** | string | Index name |
+| **etag** | number | Index Etag |
+
+
+
+## IndexChangeTypes
+
+| Name |
+| ---- |
+| **None** |
+| **BatchCompleted** |
+| **IndexAdded** |
+| **IndexRemoved** |
+| **IndexDemotedToIdle** |
+| **IndexPromotedFromIdle** |
+| **IndexDemotedToDisabled** |
+| **IndexMarkedAsErrored** |
+| **SideBySideReplace** |
+| **Renamed** |
+| **IndexPaused** |
+| **LockModeChanged** |
+| **PriorityChanged** |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-csharp.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-csharp.mdx
new file mode 100644
index 0000000000..97d0889015
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-csharp.mdx
@@ -0,0 +1,168 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to operation changes:
+
+- [ForOperationId](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foroperationid)
+- [ForAllOperations](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foralloperations)
+
+## ForOperationId
+
+Operation changes for one operation can be observed using the `ForOperationId` method.
+
+
+Please note that from RavenDB 6.2 on, operation changes can be tracked only on a **specific node**.
+The purpose of this change is to improve the consistency of results, as an operation may behave very differently
+on different nodes, and cross-cluster tracking of an operation may become confusing and ineffective if
+the operation fails over from one node to another.
+Tracking operations will therefore be possible only if the `Changes` API was
+[opened](../../client-api/changes/what-is-changes-api.mdx#accessing-changes-api) using a method that limits
+tracking to a single node: `store.Changes(dbName, nodeTag)`
+
+
+### Syntax
+
+
+
+{`IChangesObservable<OperationStatusChange> ForOperationId(long operationId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **operationId** | long | ID of an operation for which notifications will be processed. |
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[OperationStatusChange](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationchange)> | Observable that allows you to add subscriptions to notifications for an operation with a given ID. |
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes(dbName, nodeTag)
+ .ForOperationId(operationId)
+ .Subscribe(
+ change =>
+ \{
+ switch (change.State.Status)
+ \{
+ case OperationStatus.InProgress:
+ //Do Something
+ break;
+ case OperationStatus.Completed:
+ //Do Something
+ break;
+ case OperationStatus.Faulted:
+ //Do Something
+ break;
+ case OperationStatus.Canceled:
+ //Do Something
+ break;
+ default:
+ throw new ArgumentOutOfRangeException();
+ \}
+ \});
+`}
+
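+The operation ID to track is typically obtained from the `Operation` object returned when a long-running operation is sent. A hedged sketch, assuming the client's `DeleteByQueryOperation` and the `Operation.Id` property (example values, not part of the original text):
+
+{`// A hedged sketch: obtain an operation ID for tracking.
+Operation operation = store.Operations.Send(
+    new DeleteByQueryOperation(new IndexQuery \{ Query = "from Orders" \}));
+
+// Id is assumed to expose the server-side operation ID;
+// it can now be passed to ForOperationId, as shown above.
+long operationId = operation.Id;
+`}
+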
+
+
+
+
+## ForAllOperations
+
+Operation changes for all operations can be observed using the `ForAllOperations` method.
+
+
+Please note that from RavenDB 6.2 on, operation changes can be tracked only on a **specific node**.
+The purpose of this change is to improve the consistency of results, as an operation may behave very differently
+on different nodes, and cross-cluster tracking of an operation may become confusing and ineffective if
+the operation fails over from one node to another.
+Tracking operations will therefore be possible only if the `Changes` API was
+[opened](../../client-api/changes/what-is-changes-api.mdx#accessing-changes-api) using a method that limits
+tracking to a single node: `store.Changes(dbName, nodeTag)`
+
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[OperationStatusChange](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationchange)> | Observable that allows you to add subscriptions to notifications for all operations. |
+
+### Syntax
+
+
+
+{`IChangesObservable<OperationStatusChange> ForAllOperations();
+`}
+
+
+
+### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes(dbName, nodeTag)
+ .ForAllOperations()
+ .Subscribe(change => Console.WriteLine("Operation #\{1\} reports progress: \{0\}", change.State.Progress.ToJson(), change.OperationId));
+`}
+
+
+
+
+
+## OperationChange
+
+### Properties
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **State** | [OperationState](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationstate) | Operation state |
+| **OperationId** | long | Operation ID |
+
+
+
+## OperationState
+
+### Members
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Result** | [IOperationResult](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationresult) | Operation result |
+| **Progress** | IOperationProgress | Instance of IOperationProgress (JSON representation of the progress) |
+| **Status** | [OperationStatus](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationstatus) | Operation status |
+
+
+## OperationResult
+
+### Members
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Message** | string | Operation message |
+| **ShouldPersist** | bool | Determines whether the result should be saved in storage |
+
+
+## OperationStatus
+
+
+| Name | Description |
+| ---- | ----- |
+| **InProgress** | Indicates that the operation is in progress |
+| **Completed** | Indicates that the operation has completed |
+| **Faulted** | Indicates that the operation has faulted |
+| **Canceled** | Indicates that the operation has been canceled |
+
+
+## Remarks
+
+
+To get more method overloads, especially ones supporting **delegates**, please add the
+[System.Reactive.Core](https://www.nuget.org/packages/System.Reactive.Core/) package to your project.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-java.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-java.mdx
new file mode 100644
index 0000000000..afd9a0c2d2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-java.mdx
@@ -0,0 +1,95 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to operation changes:
+
+- [forOperationId](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foroperation)
+- [forAllOperations](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foralloperations)
+
+## forOperation
+
+Operation changes for one operation can be observed using the `forOperationId` method.
+
+### Syntax
+
+
+
+{`IChangesObservable<OperationStatusChange> forOperationId(long operationId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **operationId** | long | ID of an operation for which notifications will be processed. |
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[OperationStatusChange](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationchange)> | Observable that allows you to add subscriptions to notifications for an operation with a given ID. |
+
+### Example
+
+
+
+{`CleanCloseable subscription = store
+ .changes()
+ .forOperationId(operationId)
+ .subscribe(Observers.create(change -> \{
+ ObjectNode operationState = change.getState();
+
+ // do something
+ \}));
+`}
+
+
+
+
+
+## forAllOperations
+
+Operation changes for all operations can be observed using the `forAllOperations` method.
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[OperationStatusChange](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationchange)> | Observable that allows you to add subscriptions to notifications for all operations. |
+
+### Syntax
+
+
+
+{`IChangesObservable<OperationStatusChange> forAllOperations();
+`}
+
+
+
+### Example
+
+
+
+{`CleanCloseable subscription = store
+ .changes()
+ .forAllOperations()
+ .subscribe(Observers.create(change -> \{
+ System.out.println("Operation #" + change.getOperationId());
+ \}));
+`}
+
+
+
+
+
+## OperationChange
+
+### Properties
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **State** | ObjectNode | Operation state |
+| **OperationId** | long | Operation ID |
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-nodejs.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-nodejs.mdx
new file mode 100644
index 0000000000..a7de35dedf
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-operation-changes-nodejs.mdx
@@ -0,0 +1,91 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The following methods allow you to subscribe to operation changes:
+
+- [forOperationId()](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foroperation)
+- [forAllOperations()](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foralloperations)
+
+## forOperation
+
+Operation changes for one operation can be observed using the `forOperationId()` method.
+
+### Syntax
+
+
+
+{`store.changes().forOperationId(operationId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **operationId** | number | ID of an operation for which notifications will be processed. |
+
+| Return value | |
+| ------------- | ----- |
+| IChangesObservable<[OperationStatusChange](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationchange)> | Observable that allows you to add subscriptions to notifications for an operation with a given ID. |
+
+### Example
+
+
+
+{`store.changes().forOperationId(operationId)
+ .on("data", change => \{
+ const operationState = change.state;
+
+ // do something
+ \});
+`}
+
+
+
+
+
+## forAllOperations
+
+Operation changes for all operations can be observed using the `forAllOperations()` method.
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[OperationStatusChange](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#operationchange)> | Observable that allows you to add subscriptions to notifications for all operations. |
+
+### Syntax
+
+
+
+{`store.changes().forAllOperations();
+`}
+
+
+
+### Example
+
+
+
+{`store.changes().forAllOperations()
+ .on("data", change => \{
+ console.log("Operation #" + change.operationId);
+ \});
+`}
+
+
+
+
+
+## OperationChange
+
+### Properties
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **state** | object | Operation state |
+| **operationId** | number | Operation ID |
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-time-series-changes-csharp.mdx b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-time-series-changes-csharp.mdx
new file mode 100644
index 0000000000..b1cad23dbf
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_how-to-subscribe-to-time-series-changes-csharp.mdx
@@ -0,0 +1,231 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the following methods to subscribe to Time Series Changes:
+ * `ForTimeSeries`
+ Track **all** time series with a given name
+ * `ForTimeSeriesOfDocument`
+ Overload #1: Track **a specific** time series of a chosen document
+ Overload #2: Track **any** time series of a chosen document
+ * `ForAllTimeSeries`
+ Track **all** time series
+
+* In this page:
+ * [ForTimeSeries](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#fortimeseries)
+ * [ForTimeSeriesOfDocument](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#fortimeseriesofdocument)
+ * [ForAllTimeSeries](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#foralltimeseries)
+
+
+## ForTimeSeries
+
+Subscribe to changes in **all time-series with a given name**, no matter which document they belong to,
+using the `ForTimeSeries` method.
+
+#### Syntax
+
+
+
+{`IChangesObservable<TimeSeriesChange> ForTimeSeries(string timeSeriesName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **timeSeriesName** | string | Name of a time series to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[TimeSeriesChange](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#timeserieschange)> | Observable that allows you to add subscriptions to time series notifications. |
+
+#### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForTimeSeries("Likes")
+ .Subscribe
+ (change =>
+ \{
+ switch (change.Type)
+ \{
+ case TimeSeriesChangeTypes.Delete:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## ForTimeSeriesOfDocument
+
+Use `ForTimeSeriesOfDocument` to subscribe to changes in **time series of a chosen document**.
+
+* Two overloads allow you to:
+  * Track **a specific** time series of the chosen document
+  * Track **any** time series of the chosen document
+
+### Overload #1
+
+Use this `ForTimeSeriesOfDocument` overload to track changes in a **specific time series** of the chosen document.
+
+#### Syntax
+
+
+
+{`IChangesObservable<TimeSeriesChange> ForTimeSeriesOfDocument(string documentId, string timeSeriesName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **documentId** | string | ID of a document to subscribe to. |
+| **timeSeriesName** | string | Name of a time series to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[TimeSeriesChange](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#timeserieschange)> | Observable that allows you to add subscriptions to time series notifications. |
+
+#### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForTimeSeriesOfDocument("companies/1-A", "Likes")
+ .Subscribe
+ (change =>
+ \{
+ switch (change.Type)
+ \{
+ case TimeSeriesChangeTypes.Delete:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+### Overload #2
+
+Use this `ForTimeSeriesOfDocument` overload to track changes in **any time series** of the chosen document.
+
+#### Syntax
+
+
+
+{`IChangesObservable<TimeSeriesChange> ForTimeSeriesOfDocument(string documentId);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **documentId** | string | ID of a document to subscribe to. |
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[TimeSeriesChange](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#timeserieschange)> | Observable that allows you to add subscriptions to time series notifications. |
+
+#### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForTimeSeriesOfDocument("companies/1-A")
+ .Subscribe
+ (change =>
+ \{
+ switch (change.Type)
+ \{
+ case TimeSeriesChangeTypes.Delete:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## ForAllTimeSeries
+
+Subscribe to changes in **all time-series** using the `ForAllTimeSeries` method.
+
+#### Syntax
+
+
+
+{`IChangesObservable<TimeSeriesChange> ForAllTimeSeries();
+`}
+
+
+
+| Return Value | |
+| ------------- | ----- |
+| IChangesObservable<[TimeSeriesChange](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#timeserieschange)> | Observable that allows you to add subscriptions to time series notifications. |
+
+#### Example
+
+
+
+{`IDisposable subscription = store
+ .Changes()
+ .ForAllTimeSeries()
+ .Subscribe
+ (change =>
+ \{
+ switch (change.Type)
+ \{
+ case TimeSeriesChangeTypes.Delete:
+ // do something
+ break;
+ \}
+ \});
+`}
+
+
+
+
+
+## TimeSeriesChange
+
+| Name | Type | Description |
+| ------------- | ------------- | ----- |
+| **Type** | [TimeSeriesChangeTypes](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#timeserieschangetypes) | Time series change type enum |
+| **Name** | string | Time Series Name |
+| **DocumentId** | string | Time series Document Identifier |
+| **CollectionName** | string | Time series document Collection Name |
+| **From** | DateTime | Time series values From date |
+| **To** | DateTime | Time series values To date |
+| **ChangeVector** | string | Time series Change Vector |
+
+
+
+## TimeSeriesChangeTypes
+
+| Name | Value |
+| ---- | ----- |
+| **None** | `0` |
+| **Put** | `1` |
+| **Delete** | `2` |
+| **Mixed** | `3` |
+
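+`Mixed` (3) equals `Put` (1) combined with `Delete` (2), so presumably the values can be tested bitwise (an assumption, not stated in the original text):
+
+{`// A hedged sketch: test a TimeSeriesChangeTypes value bitwise.
+TimeSeriesChangeTypes type = TimeSeriesChangeTypes.Mixed;
+
+bool includesPut = (type & TimeSeriesChangeTypes.Put) != 0;       // true
+bool includesDelete = (type & TimeSeriesChangeTypes.Delete) != 0; // true
+`}
+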
+
+
+## Remarks
+
+
+To get more method overloads, especially ones supporting **delegates**, please add the
+[System.Reactive.Core](https://www.nuget.org/packages/System.Reactive.Core/) package to your project.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-csharp.mdx b/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-csharp.mdx
new file mode 100644
index 0000000000..057621ea30
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-csharp.mdx
@@ -0,0 +1,210 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The Changes API is a Push Notifications service that allows a RavenDB Client to
+ receive messages from a RavenDB Server regarding events that occurred on the server.
+* A client can subscribe to events related to documents, indexes, operations, counters, and time series.
+* Using the Changes API allows you to notify users of various changes without requiring
+ any expensive polling.
+
+* In this page:
+ * [Accessing Changes API](../../client-api/changes/what-is-changes-api.mdx#accessing-changes-api)
+ * [Connection interface](../../client-api/changes/what-is-changes-api.mdx#connection-interface)
+ * [Subscribing](../../client-api/changes/what-is-changes-api.mdx#subscribing)
+ * [Unsubscribing](../../client-api/changes/what-is-changes-api.mdx#unsubscribing)
+ * [FAQ](../../client-api/changes/what-is-changes-api.mdx#faq)
+ * [Changes API and Database Timeout](../../client-api/changes/what-is-changes-api.mdx#changes-api-and-database-timeout)
+ * [Changes API and Method Overloads](../../client-api/changes/what-is-changes-api.mdx#changes-api-and-method-overloads)
+ * [Changes API -vs- Data Subscriptions](../../client-api/changes/what-is-changes-api.mdx#changes-api--vs--data-subscriptions)
+
+## Accessing Changes API
+
+The changes subscription is accessible by a document store through its `IDatabaseChanges`
+or `ISingleNodeDatabaseChanges` interfaces.
+
+
+
+{`IDatabaseChanges Changes(string database = null);
+`}
+
+
+
+
+{`ISingleNodeDatabaseChanges Changes(string database, string nodeTag);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **database** | `string` | Name of database to open changes API for. If `null`, the `Database` configured in DocumentStore will be used. |
+| **nodeTag** | `string` | Tag of the cluster node to open changes API for. |
+
+| Return value | |
+| ------------- | ----- |
+| `IDatabaseChanges` | Instance implementing `IDatabaseChanges` interface. |
+| `ISingleNodeDatabaseChanges` | Instance implementing `ISingleNodeDatabaseChanges` interface. |
+
+* Use `IDatabaseChanges` to subscribe to database changes.
+* Use `ISingleNodeDatabaseChanges` to limit tracking to a specific node.
+
+ Note that from RavenDB 6.2 on, some changes cannot be tracked cross-cluster, but only
+ **on a specific node**. In these cases, you must open the Changes API using
+ the second overload, passing both a database name and a node tag: `store.Changes(dbName, nodeTag)`
+
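+A minimal sketch of both overloads (the database name "Northwind" and node tag "A" are example values):
+
+{`// Open the Changes API cross-cluster:
+IDatabaseChanges changes = store.Changes("Northwind");
+
+// Open the Changes API limited to a single node
+// (required, e.g., for tracking operation changes):
+ISingleNodeDatabaseChanges nodeChanges = store.Changes("Northwind", "A");
+`}
+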
+
+
+
+## Connection interface
+
+`IDatabaseChanges` inherits from the `IConnectableChanges` interface, which represents the connection.
+
+
+
+{`public interface IConnectableChanges<TChanges> : IDisposable
+    where TChanges : IDatabaseChanges
+\{
+    // Returns the state of the connection
+    bool Connected \{ get; \}
+
+    // A task that ensures that the connection to the server was established
+    Task EnsureConnectedNow();
+
+    // An event handler to detect changes to the connection status
+    event EventHandler ConnectionStatusChanged;
+
+    // An action to take if an error occurred in the connection to the server
+    event Action<Exception> OnError;
+\}
+`}
+
+
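+A short usage sketch based on the members shown above (illustrative, not from the original text):
+
+{`IDatabaseChanges changes = store.Changes();
+
+// React to connection status changes:
+changes.ConnectionStatusChanged += (sender, args) =>
+    Console.WriteLine("Connected: " + changes.Connected);
+
+// React to connection errors:
+changes.OnError += exception =>
+    Console.WriteLine("Changes API error: " + exception.Message);
+
+// Wait until the connection to the server is established:
+await changes.EnsureConnectedNow();
+`}
+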
+
+
+
+## Subscribing
+
+To receive notifications regarding server-side events, subscribe using one of the following methods.
+
+* **For Document Changes:**
+ - [ForAllDocuments](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#foralldocuments)
+ Track changes for all documents
+ - [ForDocument](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocument)
+ Track changes for a given document (by Doc ID)
+ - [ForDocumentsInCollection](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsincollection)
+ Track changes for all documents in a given collection
+ - [ForDocumentsStartingWith](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsstartingwith)
+ Track changes for documents whose ID starts with a given prefix
+
+* **For Index Changes:**
+ - [ForAllIndexes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forallindexes)
+ Track changes for all indexes
+ - [ForIndex](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forindex)
+ Track changes for a given index (by Index Name)
+
+* **For Operation Changes:**
+ Operation changes can be tracked only [on a specific node](../../client-api/changes/what-is-changes-api.mdx#accessing-changes-api).
+ - [ForAllOperations](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foralloperations)
+ Track changes for all operations
+ - [ForOperationId](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foroperationid)
+ Track changes for a given operation (by Operation ID)
+
+* **For Counter Changes:**
+ - [ForAllCounters](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forallcounters)
+ Track changes for all counters
+ - [ForCounter](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcounter)
+ Track changes for a given counter (by Counter Name)
+ - [ForCounterOfDocument](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcounterofdocument)
+ Track changes for a specific counter of a chosen document (by Doc ID and Counter Name)
+ - [ForCountersOfDocument](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx#forcountersofdocument)
+ Track changes for all counters of a chosen document (by Doc ID)
+
+* **For Time Series Changes:**
+ - [ForAllTimeSeries](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#foralltimeseries)
+ Track changes for all time series
+ - [ForTimeSeries](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#fortimeseries)
+ Track changes for all time series with a given name
+ - [ForTimeSeriesOfDocument](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx#fortimeseriesofdocument)
+ Track changes for -
+ * a **specific time series** of a given document (by Doc ID and Time Series Name)
+ * **any time series** of a given document (by Doc ID)
+
+
+
+## Unsubscribing
+
+To end a subscription (stop listening for particular notifications) you must
+`Dispose` of the subscription.
+
+
+
+{`IDatabaseChanges changes = store.Changes();
+await changes.EnsureConnectedNow();
+var subscription = changes
+ .ForAllDocuments()
+ .Subscribe(change => Console.WriteLine("\{0\} on document \{1\}", change.Type, change.Id));
+try
+\{
+ // application code here
+\}
+finally
+\{
+ if (subscription != null)
+ subscription.Dispose();
+\}
+`}
+
+
+
+
+
+## FAQ
+
+#### Changes API and Database Timeout
+
+One or more open Changes API connections will prevent a database from becoming
+idle and unloaded, regardless of [the configuration value for database idle timeout](../../server/configuration/database-configuration.mdx#databasesmaxidletimeinsec).
+
+#### Changes API and Method Overloads
+
+
+To get more method overloads, especially ones supporting **delegates**, please add the
+[System.Reactive.Core](https://www.nuget.org/packages/System.Reactive.Core/) package to your project.
+
+
+
+
+## Changes API -vs- Data Subscriptions
+
+**Changes API** and [Data Subscription](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx)
+are services that a RavenDB Server provides to subscribing clients.
+Both services respond to events that take place on the server, by sending updates
+to their subscribers.
+
+* **Changes API is a Push Notifications Service**.
+ * Changes API subscribers receive **notifications** regarding events that
+ took place on the server, without receiving the actual data entities
+ affected by these events.
+ For the modification of a document, for example, the client will receive
+ a [DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)
+ object with details like the document's ID and collection name.
+
+ * The server does **not** keep track of sent notifications or
+ check clients' usage of them. It is the client's responsibility
+ to manage its reactions to such notifications.
+
+* **Data Subscription is a Data Consumption Service**.
+ * A Data Subscription task keeps track of document modifications in the
+ database and delivers the documents in an orderly fashion when subscribers
+ indicate they are ready to receive them.
+ * The process is fully managed by the server, leaving very little for
+ the subscribers to do besides consuming the delivered documents.
+
+| | Data Subscriptions | Changes API |
+|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| What can the server Track | [Documents](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#documents-processing) [Revisions](../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx) [Counters](../../client-api/data-subscriptions/creation/examples.mdx#including-counters) Time Series | [Documents](../../client-api/changes/how-to-subscribe-to-document-changes.mdx) [Indexes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx) [Operations](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx) [Counters](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx) [Time Series](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx) |
+| What can the server Deliver | Documents Revisions Counters Time Series | Notifications |
+| Management | Managed by the Server | Managed by the Client |
diff --git a/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-java.mdx b/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-java.mdx
new file mode 100644
index 0000000000..a2049a09a9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-java.mdx
@@ -0,0 +1,153 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The Changes API is a Push Notifications service that allows a RavenDB Client to
+ receive messages from a RavenDB Server regarding events that occurred on the server.
+* A client can subscribe to events related to documents, indexes, operations, counters, and time series.
+* Using the Changes API allows you to notify users of various changes without requiring
+ any expensive polling.
+
+* In this page:
+ * [Accessing Changes API](../../client-api/changes/what-is-changes-api.mdx#accessing-changes-api)
+ * [Connection interface](../../client-api/changes/what-is-changes-api.mdx#connection-interface)
+ * [Subscribing](../../client-api/changes/what-is-changes-api.mdx#subscribing)
+ * [Unsubscribing](../../client-api/changes/what-is-changes-api.mdx#unsubscribing)
+ * [Note](../../client-api/changes/what-is-changes-api.mdx#note)
+ * [Changes API -vs- Data Subscriptions](../../client-api/changes/what-is-changes-api.mdx#changes-api--vs--data-subscriptions)
+
+## Accessing Changes API
+
+The changes subscription is accessible by a document store through its `IDatabaseChanges` interface.
+
+
+
+{`IDatabaseChanges changes();
+
+IDatabaseChanges changes(String database);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **database** | `String` | Name of database to open changes API for. If `null`, the `Database` configured in DocumentStore will be used. |
+
+| Return value | |
+| ------------- | ----- |
+| IDatabaseChanges | Instance implementing IDatabaseChanges interface. |
+
+
+
+## Connection interface
+
+`IDatabaseChanges` inherits from the `IConnectableChanges` interface, which represents the connection.
+
+
+
+{`public interface IConnectableChanges extends CleanCloseable \{
+
+ boolean isConnected();
+
+ void ensureConnectedNow();
+
+ void addConnectionStatusChanged(EventHandler handler);
+
+ void removeConnectionStatusChanged(EventHandler handler);
+
+ void addOnError(Consumer handler);
+
+ void removeOnError(Consumer handler);
+\}
+`}
+
+
+
+
+
+## Subscribing
+
+To receive notifications regarding server-side events, subscribe using one of the following methods.
+
+- [forAllDocuments](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#foralldocuments)
+- [forAllIndexes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forallindexes)
+- [forAllOperations](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foralloperations)
+- [forDocument](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocument)
+- [forDocumentsInCollection](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsincollection)
+- [forDocumentsStartingWith](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsstartingwith)
+- [forIndex](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forindex)
+- [forOperationId](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foroperation)
+
+
+
+## Unsubscribing
+
+To end a subscription (stop listening for particular notifications) you must
+`close` the subscription.
+
+
+
+{`IDatabaseChanges subscription = store.changes();
+
+subscription.ensureConnectedNow();
+
+subscription.forAllDocuments().subscribe(Observers.create(change -> \{
+ System.out.println(change.getType() + " on document " + change.getId());
+\}));
+
+try \{
+ // application code here
+\} finally \{
+ if (subscription != null) \{
+ subscription.close();
+ \}
+\}
+`}
+
+
+
+
+
+## Note
+
+
+One or more open Changes API connections will prevent a database from becoming
+idle and unloaded, regardless of [the configuration value for database idle timeout](../../server/configuration/database-configuration.mdx#databasesmaxidletimeinsec).
+
+
+
+
+## Changes API -vs- Data Subscriptions
+
+**Changes API** and [Data Subscription](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx)
+are services that a RavenDB Server provides to subscribing clients.
+Both services respond to events that take place on the server, by sending updates
+to their subscribers.
+
+* **Changes API is a Push Notifications Service**.
+ * Changes API subscribers receive **notifications** regarding events that
+ took place on the server, without receiving the actual data entities
+ affected by these events.
+ For the modification of a document, for example, the client will receive
+ a [DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)
+ object with details like the document's ID and collection name.
+
+ * The server does **not** keep track of sent notifications or
+ check clients' usage of them. It is the client's responsibility
+ to manage its reactions to such notifications.
+
+* **Data Subscription is a Data Consumption Service**.
+ * A Data Subscription task keeps track of document modifications in the
+ database and delivers the documents in an orderly fashion when subscribers
+ indicate they are ready to receive them.
+ * The process is fully managed by the server, leaving very little for
+ the subscribers to do besides consuming the delivered documents.
+
+| | Data Subscriptions | Changes API |
+|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| What can the server Track | [Documents](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#documents-processing) [Revisions](../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx) [Counters](../../client-api/data-subscriptions/creation/examples.mdx#including-counters) Time Series | [Documents](../../client-api/changes/how-to-subscribe-to-document-changes.mdx) [Indexes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx) [Operations](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx) [Counters](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx) [Time Series](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx) |
+| What can the server Deliver | Documents Revisions Counters Time Series | Notifications |
+| Management | Managed by the Server | Managed by the Client |
diff --git a/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-nodejs.mdx b/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-nodejs.mdx
new file mode 100644
index 0000000000..62d76f1a71
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/_what-is-changes-api-nodejs.mdx
@@ -0,0 +1,152 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The Changes API is a Push Notifications service that allows a RavenDB Client to
+ receive messages from a RavenDB Server regarding events that occurred on the server.
+* A client can subscribe to events related to documents, indexes, operations, counters, and time series.
+* Using the Changes API allows you to notify users of various changes without requiring
+ any expensive polling.
+
+* In this page:
+ * [Accessing Changes API](../../client-api/changes/what-is-changes-api.mdx#accessing-changes-api)
+ * [Connection interface](../../client-api/changes/what-is-changes-api.mdx#connection-interface)
+ * [Subscribing](../../client-api/changes/what-is-changes-api.mdx#subscribing)
+ * [Unsubscribing](../../client-api/changes/what-is-changes-api.mdx#unsubscribing)
+ * [FAQ](../../client-api/changes/what-is-changes-api.mdx#faq)
+ * [Changes API and Database Timeout](../../client-api/changes/what-is-changes-api.mdx#changes-api-and-database-timeout)
+ * [Changes API -vs- Data Subscriptions](../../client-api/changes/what-is-changes-api.mdx#changes-api--vs--data-subscriptions)
+
+## Accessing Changes API
+
+The changes subscription is accessible by a document store through its `IDatabaseChanges` interface.
+
+
+
+{`store.changes([database]);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **database** | `string` | Name of database to open changes API for. If `null`, the `database` configured in DocumentStore will be used. |
+
+| Return value | |
+| ------------- | ----- |
+| `IDatabaseChanges` object | Instance implementing IDatabaseChanges interface. |
+
+
+
+## Connection interface
+
+The changes object interface extends the `IConnectableChanges` interface, which represents the connection. It exposes the following properties, methods, and events.
+
+| Properties and methods | | |
+| ------------- | ------------- | ----- |
+| **connected** | boolean | Indicates whether the connection is established |
+| **on("connectionStatus")** | method | Adds a listener for 'connectionStatus' event |
+| **on("error")** | method | Adds a listener for 'error' event |
+| **ensureConnectedNow()** | method | Returns a `Promise` resolved once the connection to the server is established. |
+
+
+
+## Subscribing
+
+To receive notifications regarding server-side events, subscribe using one of the following methods.
+
+- [forAllDocuments()](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#foralldocuments)
+- [forAllIndexes()](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forallindexes)
+- [forAllOperations()](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foralloperations)
+- [forDocument()](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocument)
+- [forDocumentsInCollection()](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsincollection)
+- [forDocumentsStartingWith()](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#fordocumentsstartingwith)
+- [forIndex()](../../client-api/changes/how-to-subscribe-to-index-changes.mdx#forindex)
+- [forOperationId()](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx#foroperation)
+
+
+
+## Unsubscribing
+
+To end a subscription (stop listening for particular notifications) use `dispose`.
+
+
+
+{`const changes = store.changes();
+
+await changes.ensureConnectedNow();
+
+const allDocsChanges = changes.forAllDocuments()
+ .on("data", change => \{
+ console.log(change.type + " on document " + change.id);
+ \})
+ .on("error", err => \{
+ // handle error
+ \});
+
+// ...
+
+try \{
+ // application code here
+\} finally \{
+ // dispose changes after use
+ if (changes != null) \{
+ changes.dispose();
+ \}
+\}
+`}
+
+
+
+
+
+## FAQ
+
+#### Changes API and Database Timeout
+
+One or more open Changes API connections will prevent a database from becoming
+idle and unloaded, regardless of [the configuration value for database idle timeout](../../server/configuration/database-configuration.mdx#databasesmaxidletimeinsec).
+
+
+
+
+## Changes API -vs- Data Subscriptions
+
+**Changes API** and [Data Subscription](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx)
+are services that a RavenDB Server provides to subscribing clients.
+Both services respond to events that take place on the server, by sending updates
+to their subscribers.
+
+* **Changes API is a Push Notifications Service**.
+ * Changes API subscribers receive **notifications** regarding events that
+ took place on the server, without receiving the actual data entities
+ affected by these events.
+ For the modification of a document, for example, the client will receive
+ a [DocumentChange](../../client-api/changes/how-to-subscribe-to-document-changes.mdx#documentchange)
+ object with details like the document's ID and collection name.
+
+ * The server does **not** keep track of sent notifications or
+ check clients' usage of them. It is the client's responsibility
+ to manage its reactions to such notifications.
+
+* **Data Subscription is a Data Consumption Service**.
+ * A Data Subscription task keeps track of document modifications in the
+ database and delivers the documents in an orderly fashion when subscribers
+ indicate they are ready to receive them.
+ * The process is fully managed by the server, leaving very little for
+ the subscribers to do besides consuming the delivered documents.
+
+| | Data Subscriptions | Changes API |
+|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| What can the server Track | [Documents](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#documents-processing) [Revisions](../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx) [Counters](../../client-api/data-subscriptions/creation/examples.mdx#including-counters) Time Series | [Documents](../../client-api/changes/how-to-subscribe-to-document-changes.mdx) [Indexes](../../client-api/changes/how-to-subscribe-to-index-changes.mdx) [Operations](../../client-api/changes/how-to-subscribe-to-operation-changes.mdx) [Counters](../../client-api/changes/how-to-subscribe-to-counter-changes.mdx) [Time Series](../../client-api/changes/how-to-subscribe-to-time-series-changes.mdx) |
+| What can the server Deliver | Documents Revisions Counters Time Series | Notifications |
+| Management | Managed by the Server | Managed by the Client |
diff --git a/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-counter-changes.mdx b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-counter-changes.mdx
new file mode 100644
index 0000000000..eab5ff976d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-counter-changes.mdx
@@ -0,0 +1,34 @@
+---
+title: "Changes API: How to Subscribe to Counter Changes"
+hide_table_of_contents: true
+sidebar_label: How to Subscribe to Counter Changes
+sidebar_position: 4
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToSubscribeToCounterChangesCsharp from './_how-to-subscribe-to-counter-changes-csharp.mdx';
+import HowToSubscribeToCounterChangesJava from './_how-to-subscribe-to-counter-changes-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-document-changes.mdx b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-document-changes.mdx
new file mode 100644
index 0000000000..6cedd947fc
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-document-changes.mdx
@@ -0,0 +1,39 @@
+---
+title: "Changes API: How to Subscribe to Document Changes"
+hide_table_of_contents: true
+sidebar_label: How to Subscribe to Document Changes
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToSubscribeToDocumentChangesCsharp from './_how-to-subscribe-to-document-changes-csharp.mdx';
+import HowToSubscribeToDocumentChangesJava from './_how-to-subscribe-to-document-changes-java.mdx';
+import HowToSubscribeToDocumentChangesNodejs from './_how-to-subscribe-to-document-changes-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-index-changes.mdx b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-index-changes.mdx
new file mode 100644
index 0000000000..a0936f7f7f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-index-changes.mdx
@@ -0,0 +1,38 @@
+---
+title: "Changes API: How to Subscribe to Index Changes"
+hide_table_of_contents: true
+sidebar_label: How to Subscribe to Index Changes
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToSubscribeToIndexChangesCsharp from './_how-to-subscribe-to-index-changes-csharp.mdx';
+import HowToSubscribeToIndexChangesJava from './_how-to-subscribe-to-index-changes-java.mdx';
+import HowToSubscribeToIndexChangesNodejs from './_how-to-subscribe-to-index-changes-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-operation-changes.mdx b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-operation-changes.mdx
new file mode 100644
index 0000000000..6eb458757a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-operation-changes.mdx
@@ -0,0 +1,38 @@
+---
+title: "Changes API: How to Subscribe to Operation Changes"
+hide_table_of_contents: true
+sidebar_label: How to Subscribe to Operation Changes
+sidebar_position: 3
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToSubscribeToOperationChangesCsharp from './_how-to-subscribe-to-operation-changes-csharp.mdx';
+import HowToSubscribeToOperationChangesJava from './_how-to-subscribe-to-operation-changes-java.mdx';
+import HowToSubscribeToOperationChangesNodejs from './_how-to-subscribe-to-operation-changes-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-time-series-changes.mdx b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-time-series-changes.mdx
new file mode 100644
index 0000000000..df37e449d0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/how-to-subscribe-to-time-series-changes.mdx
@@ -0,0 +1,29 @@
+---
+title: "Changes API: How to Subscribe to Time Series Changes"
+hide_table_of_contents: true
+sidebar_label: How to Subscribe to Time Series Changes
+sidebar_position: 5
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToSubscribeToTimeSeriesChangesCsharp from './_how-to-subscribe-to-time-series-changes-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/changes/what-is-changes-api.mdx b/versioned_docs/version-7.1/client-api/changes/what-is-changes-api.mdx
new file mode 100644
index 0000000000..b4d4dfbb0c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/changes/what-is-changes-api.mdx
@@ -0,0 +1,45 @@
+---
+title: "What Is the Changes API"
+hide_table_of_contents: true
+sidebar_label: What is Changes API
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import WhatIsChangesApiCsharp from './_what-is-changes-api-csharp.mdx';
+import WhatIsChangesApiJava from './_what-is-changes-api-java.mdx';
+import WhatIsChangesApiNodejs from './_what-is-changes-api-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/cluster/_category_.json b/versioned_docs/version-7.1/client-api/cluster/_category_.json
new file mode 100644
index 0000000000..6cd44859ac
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 16,
+ "label": Cluster Related,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/cluster/_document-conflicts-in-client-side-csharp.mdx b/versioned_docs/version-7.1/client-api/cluster/_document-conflicts-in-client-side-csharp.mdx
new file mode 100644
index 0000000000..884d7efbfa
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/_document-conflicts-in-client-side-csharp.mdx
@@ -0,0 +1,105 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+## What are conflicts?
+When a single document is modified concurrently on two separate nodes,
+RavenDB cannot tell which of the changes is the correct one. This is called a document conflict.
+For more information about conflicts and their resolution, see the [article about conflicts](../../server/clustering/replication/replication-conflicts.mdx).
+
+
+By default, RavenDB resolves conflicts using the "resolve to latest" strategy: the conflict is resolved to the document version with the latest modification date.
+
+
+## When is a conflict exception thrown?
+A `DocumentConflictException` is thrown for any access to a conflicted document.
+Fetching the attachments of a conflicted document will cause the server to throw an `InvalidOperationException`.
+
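+For illustration, the following is a minimal sketch (not one of the original examples) of handling this exception when loading a document that is currently conflicted; `users/123` is a placeholder ID:
+
+
+
+{`using (var session = store.OpenSession())
+\{
+    try
+    \{
+        // Loading a conflicted document throws DocumentConflictException
+        var user = session.Load<User>("users/123");
+    \}
+    catch (DocumentConflictException)
+    \{
+        // Resolve the conflict from the client side,
+        // e.g. by PUTting or DELETEing the document (see the options below)
+    \}
+\}
+`}
+
+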
+## How can a conflict be resolved from the client side?
+ * A PUT of a document whose ID belongs to a conflicted document will resolve the conflict.
+
+
+
+{`using (var session = store.OpenSession())
+\{
+ session.Store(new User \{Name = "John Doe"\}, "users/123"); // users/123 is a conflicted document
+    session.SaveChanges(); // when this request is finished, the conflict for users/123 is resolved.
+\}
+`}
+
+
+
+ * A DELETE of a conflicted document will resolve its conflict.
+
+
+
+{`using (var session = store.OpenSession())
+\{
+ session.Delete("users/123"); // users/123 is a conflicted document
+    session.SaveChanges(); // when this request is finished, the conflict for users/123 is resolved.
+\}
+`}
+
+
+
+ * Incoming replication will resolve the conflict if the incoming document has a larger [change vector](../../server/clustering/replication/change-vector.mdx).
+
+## Modifying conflict resolution from the client-side
+In RavenDB, conflicts can be resolved either by resolving to the latest version or by using a conflict resolution script that decides which of the conflicted document variants should be kept.
+The following example shows how to set a conflict resolution script from the client side.
+
+
+{`using (var documentStore = new DocumentStore
+\{
+    Urls = new []\{ "http://[server url]" \},
+    Database = "[database name]"
+\})
+\{
+    var resolveByCollection = new Dictionary<string, ScriptResolver>
+ \{
+ \{
+ "ShoppingCarts", new ScriptResolver //specify conflict resolution for collection
+ \{
+ // conflict resolution script is written in javascript
+ Script = @"
+ var final = docs[0];
+ for(var i = 1; i < docs.length; i++)
+ \{
+ var currentCart = docs[i];
+ for(var j = 0; j < currentCart.Items.length; j++)
+ \{
+ var item = currentCart.Items[j];
+ var match = final.Items
+ .find( i => i.ProductId == item.ProductId);
+ if(!match)
+ \{
+ // not in cart, add
+ final.Items.push(item);
+ \}
+ else
+ \{
+ match.Quantity = Math.max(
+ item.Quantity ,
+ match.Quantity);
+ \}
+ \}
+ \}
+ return final; // the conflict will be resolved to this variant
+ "
+ \}
+ \}
+ \};
+
+ var op = new ModifyConflictSolverOperation(
+ documentStore.Database,
+ resolveByCollection, //we specify conflict resolution scripts by document collection
+ resolveToLatest: true); // if true, RavenDB will resolve conflict to the latest
+ // if there is no resolver defined for a given collection or
+ // the script returns null
+
+ await documentStore.Maintenance.Server.SendAsync(op);
+\}
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/cluster/_document-conflicts-in-client-side-java.mdx b/versioned_docs/version-7.1/client-api/cluster/_document-conflicts-in-client-side-java.mdx
new file mode 100644
index 0000000000..3ae2563dab
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/_document-conflicts-in-client-side-java.mdx
@@ -0,0 +1,97 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+## What are conflicts?
+When a single document is modified concurrently on two separate nodes,
+RavenDB cannot determine which of the changes is the correct one. This situation is called a document conflict.
+For more information about conflicts and their resolution, see the [article about conflicts](../../server/clustering/replication/replication-conflicts.mdx).
+
+
+By default, RavenDB resolves conflicts using the "resolve to latest" strategy: the conflict is resolved to the document with the latest 'modified date'.
+
+
+## When is a conflict exception thrown?
+A `DocumentConflictException` is thrown for any access to a conflicted document.
+Fetching the attachments of a conflicted document will cause the server to throw an `InvalidOperationException`.
+
+## How can a conflict be resolved from the client side?
+ * A PUT of a document whose ID belongs to a conflicted document will resolve the conflict.
+
+
+
+{`try (IDocumentSession session = store.openSession()) \{
+ User user = new User();
+ user.setName("John Doe");
+
+ session.store(user, "users/123");
+ // users/123 is a conflicted document
+ session.saveChanges();
+    // when this request is finished, the conflict for users/123 is resolved.
+\}
+`}
+
+
+
+ * A DELETE of a conflicted document will resolve its conflict.
+
+
+
+{`try (IDocumentSession session = store.openSession()) \{
+ session.delete("users/123"); // users/123 is a conflicted document
+    session.saveChanges(); // when this request is finished, the conflict for users/123 is resolved.
+\}
+`}
+
+
+
+ * Incoming replication will resolve the conflict if the incoming document has a larger [change vector](../../server/clustering/replication/change-vector.mdx).
+
+## Modifying conflict resolution from the client-side
+In RavenDB, conflicts can be resolved either by resolving to the latest version or by using a conflict resolution script that decides which of the conflicted document variants should be kept.
+The following example shows how to set a conflict resolution script from the client side.
+
+
+{`try (IDocumentStore documentStore = new DocumentStore(
+    new String[] \{ "http://[server url]" \}, "[database name]")) \{
+
+    Map<String, ScriptResolver> resolveByCollection = new HashMap<>();
+ ScriptResolver scriptResolver = new ScriptResolver();
+ scriptResolver.setScript(
+ " var final = docs[0];" +
+ " for(var i = 1; i < docs.length; i++)" +
+ " \{" +
+ " var currentCart = docs[i];" +
+ " for(var j = 0; j < currentCart.Items.length; j++)" +
+ " \{" +
+ " var item = currentCart.Items[j];" +
+ " var match = final.Items" +
+ " .find( i => i.ProductId == item.ProductId);" +
+ " if (!match)" +
+ " \{" +
+ " // not in cart, add" +
+ " final.Items.push(item);" +
+ " \} else \{ " +
+ " match.Quantity = Math.max(" +
+ " item.Quantity ," +
+ " match.Quantity);" +
+ " \}" +
+ " \}" +
+ " \}" +
+ " return final; // the conflict will be resolved to this variant");
+ resolveByCollection.put("ShoppingCarts", scriptResolver);
+
+ ModifyConflictSolverOperation op = new ModifyConflictSolverOperation(
+ documentStore.getDatabase(),
+ resolveByCollection, //we specify conflict resolution scripts by document collection
+ true // if true, RavenDB will resolve conflict to the latest
+ // if there is no resolver defined for a given collection or
+ // the script returns null
+ );
+
+    documentStore.maintenance().server().send(op);
+\}
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-csharp.mdx b/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-csharp.mdx
new file mode 100644
index 0000000000..b27fc3cdf6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-csharp.mdx
@@ -0,0 +1,131 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Failover behavior](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#failover-behavior)
+ * [Cluster topology in the client](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#cluster-topology-in-the-client)
+ * [Topology discovery](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#topology-discovery)
+ * [Configuring topology nodes](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#configuring-topology-nodes)
+ * [Write assurance and database groups](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#write-assurance-and-database-groups)
+
+
+## Failover behavior
+
+* In RavenDB, replication is _not_ a bundle; it is always enabled when there are two or more nodes in the cluster.
+  This means that the failover mechanism is always turned on by default.
+
+* The client holds a list of cluster nodes per database group.
+  Each time the client needs to send a request to a database, it picks a node that contains this database from the list and sends the request to it.
+  If that node is down and the request fails, the client selects another node from the list.
+
+* Which node is selected depends on the `ReadBalanceBehavior` and `LoadBalanceBehavior` configuration values (see the configuration sketch below).
+  For more information about the different values and the node selection process, see [Load balancing client requests](../../client-api/configuration/load-balance/overview.mdx).
+
+
+  Each failure to connect to a node spawns a health check for that node.
+  For more information, see [Cluster Node Health Check](../../client-api/cluster/health-check.mdx).
+
+
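+As a rough illustration, the read balance behavior can be set through the store conventions. This is a minimal sketch, assuming the C# client; the URL and database name are placeholders:
+
+
+
+{`using (var store = new DocumentStore
+\{
+    Urls = new[] \{ "http://[server url]" \},
+    Database = "[database name]",
+    Conventions = new DocumentConventions
+    \{
+        // Spread read requests across the nodes of the database group
+        ReadBalanceBehavior = ReadBalanceBehavior.RoundRobin
+    \}
+\})
+\{
+    store.Initialize();
+\}
+`}
+
+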
+
+
+## Cluster topology in the client
+
+When the client is initialized, it fetches the topologies and populates the node list used for load balancing and failover.
+During the lifetime of a RavenDB client object, it periodically receives the cluster and database topologies from the server.
+The **topology** is updated with the following logic:
+
+* Each topology has an etag, which is a number
+* Each time the topology has changed, the etag is incremented
+* For each request, the client adds the latest topology etag it has to the request headers
+* If the current topology etag at the server is higher than the one in the client, the server adds `"Refresh-Topology:true"` to the response header
+* If a client detects the `"Refresh-Topology:true"` header in the response, the client will fetch the updated topology from the server.
+ Note: if `ReadBalanceBehavior.FastestNode` is selected, the client will schedule a speed test to determine the fastest node.
+* In addition, every 5 minutes, the client fetches the current topology from the server if no requests are made within that time frame.
+
+The **client configuration** is handled in a similar way:
+
+* Each client configuration has an etag attached
+* Each time the configuration has changed at the server-side, the server adds `"Refresh-Client-Configuration"` to the response
+* When the client detects the aforementioned header in the response, it schedules fetching the new configuration
+
+
+
+## Topology discovery
+
+In RavenDB, the cluster topology has an etag that increments with each topology change.
+
+#### How and when the topology is updated:
+
+* The first time any request is sent to a RavenDB server, the client fetches the cluster topology
+* Each subsequent request includes the fetched topology etag in its HTTP headers, under the key `Topology-Etag`
+* If the response contains the `Refresh-Topology: true` header, a thread responsible for updating the topology is spawned (illustrated below)
+
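+For illustration only, a request/response exchange using these headers might look like the following (hypothetical values):
+
+
+
+{`// Client request - carries the topology etag the client currently holds:
+//     GET /databases/[database name]/docs?id=orders/1-A
+//     Topology-Etag: 17
+
+// Server response - the server's topology etag is higher,
+// so it asks the client to refresh:
+//     HTTP/1.1 200 OK
+//     Refresh-Topology: true
+`}
+
+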
+
+
+## Configuring topology nodes
+
+Listing any single node during client initialization is enough to connect properly to the specified database.
+Each node in the cluster contains the full topology of all the databases and nodes in the cluster.
+Nevertheless, it is possible to specify multiple node URLs at initialization. But why list multiple nodes, if the URL of any cluster node will do?
+
+By listing multiple nodes, we ensure that if a single node is down when a new client comes up, the client can still fetch the initial topology from another node.
+For small clusters (three to five nodes), we typically list all the nodes in the cluster.
+For larger clusters, we usually list just enough nodes that having all of them go down at once would mean you have more pressing concerns than a new client coming up.
+
+
+
+{`using (var store = new DocumentStore
+\{
+ Database = "TestDB",
+ Urls = new [] \{
+ "http://[node A url]",
+ "http://[node B url]",
+ "http://[node C url]"
+ \}
+\})
+\{
+ store.Initialize();
+
+ // the rest of ClientAPI code
+\}
+`}
+
+
+
+
+
+## Write assurance and database groups
+
+In RavenDB clusters, databases are hosted in [database groups](../../glossary/database-group.mdx).
+Since master-master replication is configured between the database group members, a write to one of the nodes is replicated to all other instances in the group.
+For important writes, it is possible to make the client wait until the transaction data has been replicated to multiple nodes.
+This is called 'write assurance', and it is available through the `WaitForReplicationAfterSaveChanges()` method.
+
+
+
+{`using (var session = store.OpenSession())
+\{
+ var user = new User
+ \{
+        Name = "John Doe"
+ \};
+
+ session.Store(user);
+
+    // Make sure that the committed data is replicated to 2 nodes
+    // before returning from the SaveChanges() call.
+ session.Advanced
+ .WaitForReplicationAfterSaveChanges(replicas: 2);
+
+ session.SaveChanges();
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-java.mdx b/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-java.mdx
new file mode 100644
index 0000000000..a9b44f2816
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-java.mdx
@@ -0,0 +1,125 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Failover behavior](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#failover-behavior)
+ * [Cluster topology in the client](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#cluster-topology-in-the-client)
+ * [Topology discovery](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#topology-discovery)
+ * [Configuring topology nodes](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#configuring-topology-nodes)
+ * [Write assurance and database groups](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#write-assurance-and-database-groups)
+
+
+## Failover behavior
+
+* In RavenDB, replication is _not_ a bundle; it is always enabled when there are two or more nodes in the cluster.
+  This means that the failover mechanism is always turned on by default.
+
+* The client holds a list of cluster nodes per database group.
+  Each time the client needs to send a request to a database, it picks a node that contains this database from the list and sends the request to it.
+  If that node is down and the request fails, the client selects another node from the list.
+
+* Which node is selected depends on the `ReadBalanceBehavior` and `LoadBalanceBehavior` configuration values.
+  For more information about the different values and the node selection process, see [Load balancing client requests](../../client-api/configuration/load-balance/overview.mdx).
+
+
+  Each failure to connect to a node spawns a health check for that node.
+  For more information, see [Cluster Node Health Check](../../client-api/cluster/health-check.mdx).
+
+
+
+
+## Cluster topology in the client
+
+When the client is initialized, it fetches the topologies and populates the node list used for load balancing and failover.
+During the lifetime of a RavenDB client object, it periodically receives the cluster and database topologies from the server.
+The **topology** is updated with the following logic:
+
+* Each topology has an etag, which is a number
+* Each time the topology has changed, the etag is incremented
+* For each request, the client adds the latest topology etag it has to the request headers
+* If the current topology etag at the server is higher than the one in the client, the server adds `"Refresh-Topology:true"` to the response header
+* If a client detects the `"Refresh-Topology:true"` header in the response, the client will fetch the updated topology from the server.
+ Note: if `ReadBalanceBehavior.FASTEST_NODE` is selected, the client will schedule a speed test to determine the fastest node.
+* In addition, every 5 minutes, the client fetches the current topology from the server if no requests are made within that time frame.
+
+The **client configuration** is handled in a similar way:
+
+* Each client configuration has an etag attached
+* Each time the configuration has changed at the server-side, the server adds `"Refresh-Client-Configuration"` to the response
+* When the client detects the aforementioned header in the response, it schedules fetching the new configuration
+
+
+
+## Topology discovery
+
+In RavenDB, the cluster topology has an etag that increments with each topology change.
+
+#### How and when the topology is updated:
+
+* The first time any request is sent to a RavenDB server, the client fetches the cluster topology
+* Each subsequent request includes the fetched topology etag in its HTTP headers, under the key `Topology-Etag`
+* If the response contains the `Refresh-Topology: true` header, a thread responsible for updating the topology is spawned
+
+
+
+## Configuring topology nodes
+
+Listing any single node during client initialization is enough to connect properly to the specified database.
+Each node in the cluster contains the full topology of all the databases and nodes in the cluster.
+Nevertheless, it is possible to specify multiple node URLs at initialization. But why list multiple nodes, if the URL of any cluster node will do?
+
+By listing multiple nodes, we ensure that if a single node is down when a new client comes up, the client can still fetch the initial topology from another node.
+For small clusters (three to five nodes), we typically list all the nodes in the cluster.
+For larger clusters, we usually list just enough nodes that having all of them go down at once would mean you have more pressing concerns than a new client coming up.
+
+
+
+{`try (IDocumentStore store = new DocumentStore(new String[]\{
+ "http://[node A url]",
+ "http://[node B url]",
+ "http://[node C url]"
+\}, "TestDB")) \{
+
+
+ store.initialize();
+
+ // the rest of ClientAPI code
+\}
+`}
+
+
+
+
+
+## Write assurance and database groups
+
+In RavenDB clusters, databases are hosted in database groups.
+Since master-master replication is configured between the database group members, a write to one of the nodes is replicated to all other instances in the group.
+For important writes, it is possible to make the client wait until the transaction data has been replicated to multiple nodes.
+This is called 'write assurance', and it is available through the `waitForReplicationAfterSaveChanges()` method.
+
+
+
+{`try (IDocumentSession session = store.openSession()) \{
+ User user = new User();
+    user.setName("John Doe");
+
+ session.store(user);
+
+    // Make sure that the committed data has been replicated
+    // before returning from the saveChanges() call.
+ session
+ .advanced()
+ .waitForReplicationAfterSaveChanges();
+\}
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-nodejs.mdx b/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-nodejs.mdx
new file mode 100644
index 0000000000..c844bea78e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/_how-client-integrates-with-replication-and-cluster-nodejs.mdx
@@ -0,0 +1,120 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Failover behavior](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#failover-behavior)
+ * [Cluster topology in the client](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#cluster-topology-in-the-client)
+ * [Topology discovery](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#topology-discovery)
+ * [Configuring topology nodes](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#configuring-topology-nodes)
+ * [Write assurance and database groups](../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#write-assurance-and-database-groups)
+
+
+## Failover behavior
+
+* In RavenDB, replication is _not_ a bundle; it is always enabled when there are two or more nodes in the cluster.
+  This means that the failover mechanism is always turned on by default.
+
+* The client holds a list of cluster nodes per database group.
+  Each time the client needs to send a request to a database, it picks a node that contains this database from the list and sends the request to it.
+  If that node is down and the request fails, the client selects another node from the list.
+
+* Which node is selected depends on the `ReadBalanceBehavior` and `LoadBalanceBehavior` configuration values.
+  For more information about the different values and the node selection process, see [Load balancing client requests](../../client-api/configuration/load-balance/overview.mdx).
+
+
+  Each failure to connect to a node spawns a health check for that node.
+  For more information, see [Cluster Node Health Check](../../client-api/cluster/health-check.mdx).
+
+
+
+
+## Cluster topology in the client
+
+When the client is initialized, it fetches the topologies and populates the node list used for load balancing and failover.
+During the lifetime of a RavenDB client object, it periodically receives the cluster and database topologies from the server.
+The **topology** is updated with the following logic:
+
+* Each topology has an etag, which is a number
+* Each time the topology has changed, the etag is incremented
+* For each request, the client adds the latest topology etag it has to the request headers
+* If the current topology etag at the server is higher than the one in the client, the server adds `"Refresh-Topology: true"` to the response header
+* If a client detects the `"Refresh-Topology: true"` header in the response, the client will fetch the updated topology from the server.
+  Note: if `ReadBalanceBehavior` is set to `FastestNode`, the client will schedule a speed test to determine the fastest node.
+* In addition, every 5 minutes, the client fetches the current topology from the server if no requests are made within that time frame.
+
+The **client configuration** is handled in a similar way:
+
+* Each client configuration has an etag attached
+* Each time the configuration has changed at the server-side, the server adds `"Refresh-Client-Configuration"` to the response
+* When the client detects the aforementioned header in the response, it schedules fetching the new configuration
+
+
+
+## Topology discovery
+
+In RavenDB, the cluster topology has an etag that increments with each topology change.
+
+#### How and when the topology is updated:
+
+* The first time any request is sent to a RavenDB server, the client fetches the cluster topology
+* Each subsequent request includes the fetched topology etag in its HTTP headers, under the key `Topology-Etag`
+* If the response contains the `Refresh-Topology: true` header, a thread responsible for updating the topology is spawned
+
+
+
+## Configuring topology nodes
+
+Listing any single node during client initialization is enough to connect properly to the specified database.
+Each node in the cluster contains the full topology of all the databases and nodes in the cluster.
+Nevertheless, it is possible to specify multiple node URLs at initialization. But why list multiple nodes, if the URL of any cluster node will do?
+
+By listing multiple nodes, we ensure that if a single node is down when a new client comes up, the client can still fetch the initial topology from another node.
+For small clusters (three to five nodes), we typically list all the nodes in the cluster.
+For larger clusters, we usually list just enough nodes that having all of them go down at once would mean you have more pressing concerns than a new client coming up.
+
+
+
+{`const store = new DocumentStore([
+ "http://[node A url]",
+ "http://[node B url]",
+ "http://[node C url]"
+], "TestDB");
+
+store.initialize();
+
+// the rest of ClientAPI code
+`}
+
+
+
+
+
+## Write assurance and database groups
+
+In RavenDB clusters, databases are hosted in database groups.
+Since master-master replication is configured between the database group members, a write to one of the nodes is replicated to all other instances in the group.
+For important writes, it is possible to make the client wait until the transaction data has been replicated to multiple nodes.
+This is called 'write assurance', and it is available through the `waitForReplicationAfterSaveChanges()` method.
+
+
+
+{`const session = store.openSession();
+const user = new User("John Doe");
+
+await session.store(user);
+
+// Make sure that the committed data has been replicated
+// before returning from the saveChanges() call.
+session
+ .advanced
+ .waitForReplicationAfterSaveChanges();
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/cluster/document-conflicts-in-client-side.mdx b/versioned_docs/version-7.1/client-api/cluster/document-conflicts-in-client-side.mdx
new file mode 100644
index 0000000000..5b64f9d83e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/document-conflicts-in-client-side.mdx
@@ -0,0 +1,29 @@
+---
+title: "Cluster: Document Conflicts in Client-side"
+hide_table_of_contents: true
+sidebar_label: Document Conflict Exceptions at Client-Side
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DocumentConflictsInClientSideCsharp from './_document-conflicts-in-client-side-csharp.mdx';
+import DocumentConflictsInClientSideJava from './_document-conflicts-in-client-side-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/cluster/health-check.mdx b/versioned_docs/version-7.1/client-api/cluster/health-check.mdx
new file mode 100644
index 0000000000..eb7431bc90
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/health-check.mdx
@@ -0,0 +1,25 @@
+---
+title: "Cluster: Cluster Node Health Check"
+hide_table_of_contents: true
+sidebar_label: Cluster Node Health Check
+sidebar_position: 3
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Cluster: Cluster Node Health Check
+
+A health check sends an HTTP request to the `/databases/[Database Name]/stats` endpoint.
+If the request succeeds, the node's failure counters are reset, which causes the client to start sending operations to that node again.
+
+### When Does it Trigger?
+
+Any time a low-level [operation](../operations/what-are-operations.mdx) fails to connect to a node, the client spawns a health-check thread for that particular node.
+The thread periodically pings the unresponsive server until it gets a proper response.
+The ping interval starts at 100 ms and gradually increases until it reaches 5-second intervals, as sketched below.
+
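+The backoff schedule can be pictured with the following hypothetical sketch. It illustrates the described behavior, not the client's actual implementation; `PingNodeAsync` is a made-up stand-in for the stats request:
+
+
+
+{`// Hypothetical backoff: the ping delay starts at 100 ms
+// and grows gradually until it is capped at 5 seconds.
+var delay = TimeSpan.FromMilliseconds(100);
+var maxDelay = TimeSpan.FromSeconds(5);
+
+while (!await PingNodeAsync()) // stand-in for the /stats request
+\{
+    await Task.Delay(delay);
+    delay = TimeSpan.FromMilliseconds(
+        Math.Min(delay.TotalMilliseconds * 2, maxDelay.TotalMilliseconds));
+\}
+`}
+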
diff --git a/versioned_docs/version-7.1/client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx b/versioned_docs/version-7.1/client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx
new file mode 100644
index 0000000000..385bb13c7c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx
@@ -0,0 +1,42 @@
+---
+title: "Client Integration with the Cluster"
+hide_table_of_contents: true
+sidebar_label: Client Integration with the Cluster
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowClientIntegratesWithReplicationAndClusterCsharp from './_how-client-integrates-with-replication-and-cluster-csharp.mdx';
+import HowClientIntegratesWithReplicationAndClusterJava from './_how-client-integrates-with-replication-and-cluster-java.mdx';
+import HowClientIntegratesWithReplicationAndClusterNodejs from './_how-client-integrates-with-replication-and-cluster-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/cluster/speed-test.mdx b/versioned_docs/version-7.1/client-api/cluster/speed-test.mdx
new file mode 100644
index 0000000000..c505a18751
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/cluster/speed-test.mdx
@@ -0,0 +1,40 @@
+---
+title: "Cluster: Speed Test"
+hide_table_of_contents: true
+sidebar_label: Client Speed Test
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Cluster: Speed Test
+
+
+* In RavenDB Client API, if the [Read Balance Behavior](../../client-api/configuration/load-balance/read-balance-behavior.mdx) is configured for the _Fastest Node_,
+ then under certain conditions, the client executes a `Speed Test` for each node in the cluster so that the fastest node can be accessed for ***Read*** requests.
+
+* When doing a `Speed Test`, the client checks the response time from all the nodes in the topology.
+ This is done per 'Read' request that is executed.
+
+* Once the Speed Test is finished, the client stores the fastest node found.
+ After that, the speed test will be repeated every minute.
+
+## When does the Speed Test Trigger?
+
+The Speed Test is triggered in the following cases:
+
+* When the client configuration is changed to `FastestNode` (see the configuration sketch below).
+  Once the client configuration is updated on the server, the next response from the server to the client will include the `Refresh-Client-Configuration` header.
+  When the client sees this header for the first time, it starts the Speed Test - provided the configuration is indeed set to _FastestNode_.
+
+* Every 5 minutes the client checks the server for the current nodes' topology.
+ At this periodic check, the Speed Test will be triggered if _FastestNode_ is set.
+
+* Any time the nodes' topology changes - again, only if _FastestNode_ is set.
+
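+For context, the following minimal sketch shows how the fastest-node behavior could be enabled on the client (assuming the C# client; the URL and database name are placeholders):
+
+
+
+{`using (var store = new DocumentStore
+\{
+    Urls = new[] \{ "http://[server url]" \},
+    Database = "[database name]",
+    Conventions = new DocumentConventions
+    \{
+        // Direct 'Read' requests to the node with the best measured response time
+        ReadBalanceBehavior = ReadBalanceBehavior.FastestNode
+    \}
+\})
+\{
+    store.Initialize();
+\}
+`}
+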
+
diff --git a/versioned_docs/version-7.1/client-api/commands/_category_.json b/versioned_docs/version-7.1/client-api/commands/_category_.json
new file mode 100644
index 0000000000..053c45fac9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/_category_.json
@@ -0,0 +1,4 @@
+{
+    "position": 11,
+    "label": "Commands"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/commands/_overview-csharp.mdx b/versioned_docs/version-7.1/client-api/commands/_overview-csharp.mdx
new file mode 100644
index 0000000000..c6e3277d64
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/_overview-csharp.mdx
@@ -0,0 +1,227 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* RavenDB's Client API is structured in layers.
+ At the highest layer, you interact with the [document store](../../client-api/what-is-a-document-store.mdx) and the [document session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx),
+ which handle most common database tasks like loading, saving, and querying documents.
+
+* Beneath this high-level interface are Operations and Commands:
+
+ * **Operations**:
+
+ * Operations provide management functionality outside the session's context,
+ like creating a database, performing bulk actions, or managing server-wide configurations.
+
+ * Learn more about Operations in [what are Operations](../../client-api/operations/what-are-operations.mdx).
+
+ * **Commands**:
+
+ * All high-level methods and Operations are built on top of Commands.
+ Commands form the lowest-level operations that directly communicate with the server.
+
+ * For example, a session’s _Load_ method translates internally to a _LoadOperation_,
+ which ultimately relies on a _GetDocumentsCommand_ to fetch data from the server.
+
+ * Commands are responsible for sending the appropriate request to the server using a `Request Executor`,
+ and parsing the server's response.
+
+ * All commands can be executed using either the [Store's _Request Executor_](../../client-api/commands/overview.mdx#execute-command---using-the-store)
+ or the [Session's _Request Executor_](../../client-api/commands/overview.mdx#execute-command---using-the-session),
+ regardless of whether the command is session-related or not.
+
+* This layered structure lets you work at any level, depending on your needs.
+
+* In this page:
+ * [Execute command - using the Store Request Executor](../../client-api/commands/overview.mdx#execute-command---using-the-store-request-executor)
+ * [Execute command - using the Session Request Executor](../../client-api/commands/overview.mdx#execute-command---using-the-session-request-executor)
+ * [Available commands](../../client-api/commands/overview.mdx#available-commands)
+ * [Syntax](../../client-api/commands/overview.mdx#syntax)
+
+
+## Execute command - using the Store Request Executor
+
+This example shows how to execute the low-level `CreateSubscriptionCommand` via the **Store**.
+(For examples of creating a subscription using higher-level methods, see [subscription creation examples](../../client-api/data-subscriptions/creation/examples.mdx)).
+
+
+
+
+{`// Using the store object
+using (var store = new DocumentStore())
+// Allocate a context from the store's context pool for executing the command
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+ // Define a command
+ var cmd = new CreateSubscriptionCommand(store.Conventions,
+ new SubscriptionCreationOptions()
+ {
+ Name = "Orders subscription",
+ Query = "from Orders"
+ });
+
+ // Call 'Execute' on the store's Request Executor to send the command to the server,
+ // pass the command and the store context.
+ store.GetRequestExecutor().Execute(cmd, context);
+}
+`}
+
+
+
+
+{`// Using the store object
+using (var store = new DocumentStore())
+// Allocate a context from the store's context pool for executing the command
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+ // Define a command
+ var cmd = new CreateSubscriptionCommand(store.Conventions,
+ new SubscriptionCreationOptions()
+ {
+ Name = "Orders subscription",
+ Query = "from Orders"
+ });
+
+ // Call 'ExecuteAsync' on the store's Request Executor to send the command to the server,
+ // pass the command and the store context.
+ await store.GetRequestExecutor().ExecuteAsync(cmd, context);
+}
+`}
+
+
+
+
+
+
+## Execute command - using the Session Request Executor
+
+This example shows how to execute the low-level `GetDocumentsCommand` via the **Session**.
+(For loading a document using higher-level methods, see [loading entities](../../client-api/session/loading-entities.mdx)).
+
+
+
+
+{`// Using the session
+using (var session = store.OpenSession())
+{
+ // Define a command
+ var cmd = new GetDocumentsCommand(store.Conventions, "orders/1-A", null, false);
+
+ // Call 'Execute' on the session's Request Executor to send the command to the server
+ // Pass the command and the 'Session.Advanced.Context'
+ session.Advanced.RequestExecutor.Execute(cmd, session.Advanced.Context);
+
+ // Access the results
+ var blittable = (BlittableJsonReaderObject)cmd.Result.Results[0];
+
+ // Deserialize the blittable JSON into your typed object
+ var order = session.Advanced.JsonConverter.FromBlittable(ref blittable,
+ "orders/1-A", false);
+
+ // Now you have a strongly-typed Order object that can be accessed
+ var orderedAt = order.OrderedAt;
+}
+`}
+
+
+
+
+{`// Using the session
+using (var asyncSession = store.OpenAsyncSession())
+{
+ // Define a command
+ var cmd = new GetDocumentsCommand(store.Conventions, "orders/1-A", null, false);
+
+ // Call 'ExecuteAsync' on the session's Request Executor to send the command to the server
+ // Pass the command and the 'Session.Advanced.Context'
+ await asyncSession.Advanced.RequestExecutor.ExecuteAsync(cmd,
+ asyncSession.Advanced.Context);
+
+ // Access the results
+ var blittable = (BlittableJsonReaderObject)cmd.Result.Results[0];
+
+ // Deserialize the blittable JSON into your typed object
+ var order = asyncSession.Advanced.JsonConverter.FromBlittable(ref blittable,
+ "orders/1-A", true);
+
+ // Now you have a strongly-typed Order object that can be accessed
+ var orderedAt = order.OrderedAt;
+}
+`}
+
+
+
+
+* Note that the transaction created for the HTTP request when executing the command
+ is separate from the transaction initiated by the session's [SaveChanges](../../client-api/session/saving-changes.mdx) method,
+ even if both are called within the same code block.
+
+* Learn more about transactions in RavenDB in [Transaction support](../../client-api/faq/transaction-support.mdx).
+
+
+
+## Available commands
+
+* **The following low-level commands, which inherit from `RavenCommand`, are available**:
+
+ * ConditionalGetDocumentsCommand
+ * CreateSubscriptionCommand
+ * [DeleteDocumentCommand](../../client-api/commands/documents/delete.mdx)
+ * DeleteSubscriptionCommand
+ * DropSubscriptionConnectionCommand
+ * ExplainQueryCommand
+ * GetClusterTopologyCommand
+ * GetConflictsCommand
+ * GetDatabaseTopologyCommand
+ * [GetDocumentsCommand](../../client-api/commands/documents/get.mdx)
+ * GetIdentitiesCommand
+ * GetNextOperationIdCommand
+ * GetNodeInfoCommand
+ * GetOperationStateCommand
+ * GetRawStreamResultCommand
+ * GetRevisionsBinEntryCommand
+ * GetRevisionsCommand
+ * GetSubscriptionsCommand
+ * GetSubscriptionStateCommand
+ * GetTcpInfoCommand
+ * GetTrafficWatchConfigurationCommand
+ * HeadAttachmentCommand
+ * HeadDocumentCommand
+ * HiLoReturnCommand
+ * IsDatabaseLoadedCommand
+ * KillOperationCommand
+ * MultiGetCommand
+ * NextHiLoCommand
+ * NextIdentityForCommand
+ * [PutDocumentCommand](../../client-api/commands/documents/put.mdx)
+ * PutSecretKeyCommand
+ * QueryCommand
+ * QueryStreamCommand
+ * SeedIdentityForCommand
+ * [SingleNodeBatchCommand](../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx)
+ * WaitForRaftIndexCommand
+
+
+
+## Syntax
+
+
+
+{`void Execute<TResult>(RavenCommand<TResult> command,
+ JsonOperationContext context,
+ SessionInfo sessionInfo = null);
+
+Task ExecuteAsync<TResult>(RavenCommand<TResult> command,
+ JsonOperationContext context,
+ SessionInfo sessionInfo = null,
+ CancellationToken token = default(CancellationToken));
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/_overview-java.mdx b/versioned_docs/version-7.1/client-api/commands/_overview-java.mdx
new file mode 100644
index 0000000000..5fac60b00a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/_overview-java.mdx
@@ -0,0 +1,127 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* RavenDB's Client API is structured in layers.
+ At the highest layer, you interact with the [document store](../../client-api/what-is-a-document-store.mdx) and the [document session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx),
+ which handle most common database tasks like loading, saving, and querying documents.
+
+* Beneath this high-level interface are Operations and Commands:
+
+ * **Operations**:
+
+ * Operations provide management functionality outside the session's context,
+ like creating a database, performing bulk actions, or managing server-wide configurations.
+
+ * Learn more about Operations in [what are Operations](../../client-api/operations/what-are-operations.mdx).
+
+ * **Commands**:
+
+ * All high-level methods and Operations are built on top of Commands.
+ Commands form the lowest-level operations that directly communicate with the server.
+
+ * For example, a session’s _Load_ method translates internally to a _LoadOperation_,
+ which ultimately relies on a _GetDocumentsCommand_ to fetch data from the server.
+
+ * Commands are responsible for sending the appropriate request to the server using a `Request Executor`,
+ and parsing the server's response.
+
+ * All commands can be executed using either the Store's _Request Executor_ or the Session's _Request Executor_,
+ regardless of whether the command is session-related or not.
+
+* This layered structure lets you work at any level, depending on your needs.
+
+* In this page:
+ * [Examples](../../client-api/commands/overview.mdx#examples)
+ * [Available commands](../../client-api/commands/overview.mdx#available-commands)
+ * [Syntax](../../client-api/commands/overview.mdx#syntax)
+
+
+## Examples
+
+#### GetDocumentsCommand
+
+
+
+{`try (IDocumentSession session = store.openSession()) \{
+ GetDocumentsCommand command = new GetDocumentsCommand("orders/1-A", null, false);
+ session.advanced().getRequestExecutor().execute(command);
+ ObjectNode order = (ObjectNode) command.getResult().getResults().get(0);
+\}
+`}
+
+
+
+#### DeleteDocumentCommand
+
+
+
+{`try (IDocumentSession session = store.openSession()) \{
+ DeleteDocumentCommand command = new DeleteDocumentCommand("employees/1-A", null);
+ session.advanced().getRequestExecutor().execute(command);
+\}
+`}
+
+
+
+
+
+## Available commands
+
+* **The following low-level commands are available**:
+ * ConditionalGetDocumentsCommand
+ * CreateSubscriptionCommand
+ * [DeleteDocumentCommand](../../client-api/commands/documents/delete.mdx)
+ * DeleteSubscriptionCommand
+ * DropSubscriptionConnectionCommand
+ * ExplainQueryCommand
+ * GetClusterTopologyCommand
+ * GetConflictsCommand
+ * GetDatabaseTopologyCommand
+ * [GetDocumentsCommand](../../client-api/commands/documents/get.mdx)
+ * GetIdentitiesCommand
+ * GetNextOperationIdCommand
+ * GetNodeInfoCommand
+ * GetOperationStateCommand
+ * GetRawStreamResultCommand
+ * GetRevisionsBinEntryCommand
+ * GetRevisionsCommand
+ * GetSubscriptionsCommand
+ * GetSubscriptionStateCommand
+ * GetTcpInfoCommand
+ * GetTrafficWatchConfigurationCommand
+ * HeadAttachmentCommand
+ * HeadDocumentCommand
+ * HiLoReturnCommand
+ * IsDatabaseLoadedCommand
+ * KillOperationCommand
+ * MultiGetCommand
+ * NextHiLoCommand
+ * NextIdentityForCommand
+ * [PutDocumentCommand](../../client-api/commands/documents/put.mdx)
+ * PutSecretKeyCommand
+ * QueryCommand
+ * QueryStreamCommand
+ * SeedIdentityForCommand
+ * [SingleNodeBatchCommand](../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx)
+ * WaitForRaftIndexCommand
+
+
+
+## Syntax
+
+
+
+{`public <TResult> void execute(RavenCommand<TResult> command);
+
+public <TResult> void execute(RavenCommand<TResult> command, SessionInfo sessionInfo);
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/_overview-nodejs.mdx b/versioned_docs/version-7.1/client-api/commands/_overview-nodejs.mdx
new file mode 100644
index 0000000000..9dc3adc380
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/_overview-nodejs.mdx
@@ -0,0 +1,154 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* RavenDB's Client API is structured in layers.
+ At the highest layer, you interact with the [document store](../../client-api/what-is-a-document-store.mdx) and the [document session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx),
+ which handle most common database tasks like loading, saving, and querying documents.
+
+* Beneath this high-level interface are Operations and Commands:
+
+ * **Operations**:
+
+ * Operations provide management functionality outside the session's context,
+ like creating a database, performing bulk actions, or managing server-wide configurations.
+
+ * Learn more about Operations in [what are Operations](../../client-api/operations/what-are-operations.mdx).
+
+ * **Commands**:
+
+ * All high-level methods and Operations are built on top of Commands.
+ Commands form the lowest-level operations that directly communicate with the server.
+
+ * For example, a session’s _Load_ method translates internally to a _LoadOperation_,
+ which ultimately relies on a _GetDocumentsCommand_ to fetch data from the server.
+
+ * Commands are responsible for sending the appropriate request to the server using a `Request Executor`,
+ and parsing the server's response.
+
+ * All commands can be executed using either the [Store's _Request Executor_](../../client-api/commands/overview.mdx#execute-command---using-the-store)
+ or the [Session's _Request Executor_](../../client-api/commands/overview.mdx#execute-command---using-the-session),
+ regardless of whether the command is session-related or not.
+
+* This layered structure lets you work at any level, depending on your needs.
+
+* In this page:
+ * [Execute command - using the Store Request Executor](../../client-api/commands/overview.mdx#execute-command---using-the-store-request-executor)
+ * [Execute command - using the Session Request Executor](../../client-api/commands/overview.mdx#execute-command---using-the-session-request-executor)
+ * [Available commands](../../client-api/commands/overview.mdx#available-commands)
+ * [Syntax](../../client-api/commands/overview.mdx#syntax)
+
+
+## Execute command - using the Store Request Executor
+
+This example shows how to execute the low-level `CreateSubscriptionCommand` via the **Store**.
+(For examples of creating a subscription using higher-level methods, see [subscription creation examples](../../client-api/data-subscriptions/creation/examples.mdx)).
+
+
+
+{`// Define a command
+const cmd = new CreateSubscriptionCommand(\{
+ name: "Orders subscription",
+ query: "from Orders"
+\});
+
+// Call 'execute' on the store's Request Executor to run the command on the server
+// Pass the command
+await documentStore.getRequestExecutor().execute(cmd);
+`}
+
+
+
+
+
+## Execute command - using the Session Request Executor
+
+This example shows how to execute the low-level `GetDocumentsCommand` via the **Session**.
+(For loading a document using higher-level methods, see [loading entities](../../client-api/session/loading-entities.mdx)).
+
+
+
+{`const session = documentStore.openSession();
+
+// Define a command
+const cmd = new GetDocumentsCommand(
+ \{ conventions: documentStore.conventions, id: "orders/1-A" \});
+
+// Call 'execute' on the session's Request Executor to run the command on the server
+// Pass the command
+await session.advanced.requestExecutor.execute(cmd);
+
+// Access the results
+const order = cmd.result.results[0];
+const orderedAt = order.OrderedAt;
+`}
+
+
+
+* Note that the transaction created for the HTTP request when executing the command
+ is separate from the transaction initiated by the session's [SaveChanges](../../client-api/session/saving-changes.mdx) method,
+ even if both are called within the same code block.
+
+* Learn more about transactions in RavenDB in [Transaction support](../../client-api/faq/transaction-support.mdx).
+
+
+
+## Available commands
+
+* **The following low-level commands are available**:
+ * ConditionalGetDocumentsCommand
+ * CreateSubscriptionCommand
+ * [DeleteDocumentCommand](../../client-api/commands/documents/delete.mdx)
+ * DeleteSubscriptionCommand
+ * DropSubscriptionConnectionCommand
+ * ExplainQueryCommand
+ * GetClusterTopologyCommand
+ * GetConflictsCommand
+ * GetDatabaseTopologyCommand
+ * [GetDocumentsCommand](../../client-api/commands/documents/get.mdx)
+ * GetIdentitiesCommand
+ * GetNextOperationIdCommand
+ * GetNodeInfoCommand
+ * GetOperationStateCommand
+ * GetRawStreamResultCommand
+ * GetRevisionsBinEntryCommand
+ * GetRevisionsCommand
+ * GetSubscriptionsCommand
+ * GetSubscriptionStateCommand
+ * GetTcpInfoCommand
+ * GetTrafficWatchConfigurationCommand
+ * HeadAttachmentCommand
+ * HeadDocumentCommand
+ * HiLoReturnCommand
+ * IsDatabaseLoadedCommand
+ * KillOperationCommand
+ * MultiGetCommand
+ * NextHiLoCommand
+ * NextIdentityForCommand
+ * [PutDocumentCommand](../../client-api/commands/documents/put.mdx)
+ * PutSecretKeyCommand
+ * QueryCommand
+ * QueryStreamCommand
+ * SeedIdentityForCommand
+ * [SingleNodeBatchCommand](../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx)
+ * WaitForRaftIndexCommand
+
+
+
+## Syntax
+
+
+
+{`execute(command);
+execute(command, sessionInfo);
+execute(command, sessionInfo, executeOptions);
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/batches/_category_.json b/versioned_docs/version-7.1/client-api/commands/batches/_category_.json
new file mode 100644
index 0000000000..c638dbab0d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/batches/_category_.json
@@ -0,0 +1,4 @@
+{
+    "position": 2,
+    "label": "Batching Commands"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-csharp.mdx b/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-csharp.mdx
new file mode 100644
index 0000000000..e92e999df4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-csharp.mdx
@@ -0,0 +1,333 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the low-level `SingleNodeBatchCommand` to send **multiple commands** in a **single request** to the server.
+ This reduces the number of remote calls and allows several operations to share the same transaction.
+
+* All the commands sent in the batch are executed as a **single transaction** on the node the client communicated with.
+ If any command fails, the entire batch is rolled back, ensuring data integrity.
+
+* The commands are replicated to other nodes in the cluster only AFTER the transaction is successfully completed on that node.
+
+* In this page:
+ * [Examples](../../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx#examples)
+ * [Available batch commands](../../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx#available-batch-commands)
+ * [Syntax](../../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx#syntax)
+
+
+## Examples
+
+
+
+#### Send multiple commands - using the Store's request executor:
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor()
+ .ContextPool.AllocateOperationContext(out var storeContext))
+{
+ // Define the list of batch commands to execute
+    var commands = new List<ICommandData>
+ {
+ new PutCommandData("employees/999", null, new DynamicJsonValue
+ {
+ ["FirstName"] = "James",
+ ["@metadata"] = new DynamicJsonValue
+ {
+ ["@collection"] = "employees"
+ }
+ }),
+
+ new PatchCommandData("employees/2-A", null, new PatchRequest
+ {
+ Script = "this.HomePhone = 'New phone number';"
+ }, null),
+
+ new DeleteCommandData("employees/3-A", null)
+ };
+
+ // Define the SingleNodeBatchCommand command
+ var batchCommand = new SingleNodeBatchCommand(store.Conventions, commands);
+
+ // Execute the batch command,
+ // all the 3 commands defined in the list will be executed in a single transaction
+ store.GetRequestExecutor().Execute(batchCommand, storeContext);
+
+ // Can access the batch command results:
+ var commandResults = batchCommand.Result.Results;
+ Assert.Equal(3, commandResults.Length);
+
+ var blittable = (BlittableJsonReaderObject)commandResults[0];
+
+ blittable.TryGetMember("Type", out var commandType);
+ Assert.Equal("PUT", commandType.ToString());
+
+ blittable.TryGetMember("@id", out var documentId);
+ Assert.Equal("employees/999", documentId.ToString());
+}
+`}
+
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor()
+ .ContextPool.AllocateOperationContext(out var storeContext))
+{
+ // Define the list of batch commands to execute
+    var commands = new List<ICommandData>
+ {
+ new PutCommandData("employees/999", null, new DynamicJsonValue
+ {
+ ["FirstName"] = "James",
+ ["@metadata"] = new DynamicJsonValue
+ {
+ ["@collection"] = "employees"
+ }
+ }),
+
+ new PatchCommandData("employees/2-A", null, new PatchRequest
+ {
+ Script = "this.HomePhone = 'New phone number';"
+ }, null),
+
+ new DeleteCommandData("employees/3-A", null)
+ };
+
+ // Define the SingleNodeBatchCommand command
+ var batchCommand = new SingleNodeBatchCommand(store.Conventions,
+ commands);
+
+ // Execute the batch command,
+ // all the 3 commands defined in the list will be executed in a single transaction
+ await store.GetRequestExecutor().ExecuteAsync(batchCommand, storeContext);
+
+ // Can access the batch command results:
+ var commandResults = batchCommand.Result.Results;
+ Assert.Equal(3, commandResults.Length);
+
+ var blittable = (BlittableJsonReaderObject)commandResults[0];
+
+ blittable.TryGetMember("Type", out var commandType);
+ Assert.Equal("PUT", commandType.ToString());
+
+ blittable.TryGetMember("@id", out var documentId);
+ Assert.Equal("employees/999", documentId.ToString());
+}
+`}
+
+
+
+
+
+
+
+#### Send multiple commands - using the Session's request executor:
+* `SingleNodeBatchCommand` can also be executed using the session's request executor.
+
+* Note that the transaction created for the HTTP request when executing `SingleNodeBatchCommand`
+ is separate from the transaction initiated by the session's [SaveChanges](../../../client-api/session/saving-changes.mdx) method, even if both are called within the same code block.
+ Learn more about transactions in RavenDB in [Transaction support](../../../client-api/faq/transaction-support.mdx).
+
+
+
+
+{`using (var session = store.OpenSession())
+{
+ // Define the list of batch commands to execute
+    var commands = new List<ICommandData>
+ {
+ new PutCommandData("employees/999", null, new DynamicJsonValue
+ {
+ ["FirstName"] = "James",
+ ["@metadata"] = new DynamicJsonValue
+ {
+ ["@collection"] = "employees"
+ }
+ }),
+
+ new PatchCommandData("employees/2-A", null, new PatchRequest
+ {
+ Script = "this.HomePhone = 'New phone number';"
+ }, null),
+
+ new DeleteCommandData("employees/3-A", null)
+ };
+
+ // Define the SingleNodeBatchCommand command
+ var batchCommand = new SingleNodeBatchCommand(store.Conventions,
+ commands);
+
+ // Execute the batch command,
+ // all the 3 commands defined in the list will be executed in a single transaction
+ session.Advanced.RequestExecutor.Execute(batchCommand, session.Advanced.Context);
+
+ // Can access the batch command results:
+ var commandResults = batchCommand.Result.Results;
+ Assert.Equal(3, commandResults.Length);
+
+ var blittable = (BlittableJsonReaderObject)commandResults[0];
+
+ blittable.TryGetMember("Type", out var commandType);
+ Assert.Equal("PUT", commandType.ToString());
+
+ blittable.TryGetMember("@id", out var documentId);
+ Assert.Equal("employees/999", documentId.ToString());
+}
+`}
+
+
+
+
+{`using (var session = store.OpenAsyncSession())
+{
+ // Define the list of batch commands to execute
+    var commands = new List<ICommandData>
+ {
+ new PutCommandData("employees/999", null, new DynamicJsonValue
+ {
+ ["FirstName"] = "James",
+ ["@metadata"] = new DynamicJsonValue
+ {
+ ["@collection"] = "employees"
+ }
+ }),
+
+ new PatchCommandData("employees/2-A", null, new PatchRequest
+ {
+ Script = "this.HomePhone = 'New phone number';"
+ }, null),
+
+ new DeleteCommandData("employees/3-A", null)
+ };
+
+ // Define the SingleNodeBatchCommand command
+ var batchCommand = new SingleNodeBatchCommand(store.Conventions,
+ commands);
+
+ // Execute the batch command,
+ // all the 3 commands defined in the list will be executed in a single transaction
+ await session.Advanced.RequestExecutor.ExecuteAsync(
+ batchCommand, session.Advanced.Context);
+
+ // Can access the batch command results:
+ var commandResults = batchCommand.Result.Results;
+ Assert.Equal(3, commandResults.Length);
+
+ var blittable = (BlittableJsonReaderObject)commandResults[0];
+
+ blittable.TryGetMember("Type", out var commandType);
+ Assert.Equal("PUT", commandType.ToString());
+
+ blittable.TryGetMember("@id", out var documentId);
+ Assert.Equal("employees/999", documentId.ToString());
+}
+`}
+
+
+
+
+
+
+
+## Available batch commands
+
+**The following commands can be sent in a batch via `SingleNodeBatchCommand`**:
+(These commands implement the `ICommandData` interface).
+
+ * BatchPatchCommandData
+ * CopyAttachmentCommandData
+ * CountersBatchCommandData
+ * DeleteAttachmentCommandData
+ * DeleteCommandData
+ * DeleteCompareExchangeCommandData
+ * DeletePrefixedCommandData
+ * ForceRevisionCommandData
+ * IncrementalTimeSeriesBatchCommandData
+ * JsonPatchCommandData
+ * MoveAttachmentCommandData
+ * PatchCommandData
+ * PutAttachmentCommandData
+ * PutCommandData
+ * PutCompareExchangeCommandData
+ * TimeSeriesBatchCommandData
+
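+Commands of different types can be combined freely in a single batch.
+For example, the following is a minimal sketch (assuming the `CountersBatchCommandData`
+constructor and the `CounterOperation` class from the Counters API) that creates a document
+and increments a counter on it within the same transaction:
+
+
+
+{`var commands = new List<ICommandData>
+{
+    // Create a document
+    new PutCommandData("users/1", null, new DynamicJsonValue
+    {
+        ["Name"] = "Lilly",
+        ["@metadata"] = new DynamicJsonValue { ["@collection"] = "Users" }
+    }),
+
+    // Increment a counter on the new document - in the same transaction
+    new CountersBatchCommandData("users/1", new CounterOperation
+    {
+        Type = CounterOperationType.Increment,
+        CounterName = "Likes",
+        Delta = 10
+    })
+};
+
+var batchCommand = new SingleNodeBatchCommand(store.Conventions, commands);
+`}
+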
+
+
+## Syntax
+
+
+
+{`public SingleNodeBatchCommand(
+ DocumentConventions conventions,
+    IList<ICommandData> commands,
+ BatchOptions options = null)
+`}
+
+
+
+
+{`public class BatchOptions
+\{
+ public TimeSpan? RequestTimeout \{ get; set; \}
+ public ReplicationBatchOptions ReplicationOptions \{ get; set; \}
+ public IndexBatchOptions IndexOptions \{ get; set; \}
+ public ShardedBatchOptions ShardedOptions \{ get; set; \}
+\}
+
+public class ReplicationBatchOptions
+\{
+    // If set to true,
+    // the server will wait for replication to reach the number of replicas specified below
+    // (or a majority of the database nodes, when 'Majority' is set to true).
+    public bool WaitForReplicas \{ get; set; \}
+
+ public int NumberOfReplicasToWaitFor \{ get; set; \}
+ public TimeSpan WaitForReplicasTimeout \{ get; set; \}
+ public bool Majority \{ get; set; \}
+ public bool ThrowOnTimeoutInWaitForReplicas \{ get; set; \}
+\}
+
+public sealed class IndexBatchOptions
+\{
+ public bool WaitForIndexes \{ get; set; \}
+ public TimeSpan WaitForIndexesTimeout \{ get; set; \}
+ public bool ThrowOnTimeoutInWaitForIndexes \{ get; set; \}
+ public string[] WaitForSpecificIndexes \{ get; set; \}
+\}
+
+public class ShardedBatchOptions
+\{
+ public ShardedBatchBehavior BatchBehavior \{ get; set; \}
+\}
+`}
+
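+
+For illustration, the following sketch passes a `BatchOptions` instance to the command;
+the timeout and replica values used here are arbitrary examples:
+
+
+
+{`var options = new BatchOptions
+\{
+    RequestTimeout = TimeSpan.FromSeconds(30),
+    ReplicationOptions = new ReplicationBatchOptions
+    \{
+        WaitForReplicas = true,
+        NumberOfReplicasToWaitFor = 1,
+        WaitForReplicasTimeout = TimeSpan.FromSeconds(15),
+        ThrowOnTimeoutInWaitForReplicas = true
+    \}
+\};
+
+// Pass the options as the third parameter
+var batchCommand = new SingleNodeBatchCommand(store.Conventions, commands, options);
+`}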
+
+
+
+{`// Executing \`SingleNodeBatchCommand\` returns the following object:
+// ================================================================
+
+public class BatchCommandResult
+\{
+ public BlittableJsonReaderArray Results \{ get; set; \}
+ public long? TransactionIndex \{ get; set; \}
+\}
+
+public sealed class BlittableArrayResult
+\{
+ public BlittableJsonReaderArray Results \{ get; set; \}
+ public long TotalResults \{ get; set; \}
+ public string ContinuationToken \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-java.mdx b/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-java.mdx
new file mode 100644
index 0000000000..8a2c048c6f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-java.mdx
@@ -0,0 +1,75 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To send **multiple commands** in a **single request**, reducing the number of remote calls and allowing several operations to share the **same transaction**, use `BatchCommand`.
+
+## Syntax
+
+
+
+{`public BatchCommand(DocumentConventions conventions, List<ICommandData> commands, BatchOptions options)
+`}
+
+
+
+### The following commands can be sent using a batch
+
+* DeleteCommandData
+* DeletePrefixedCommandData
+* PutCommandData
+* PatchCommandData
+* DeleteAttachmentCommandData
+* PutAttachmentCommandData
+
+### Batch Options
+
+
+
+{`public class BatchOptions \{
+ private boolean waitForReplicas;
+ private int numberOfReplicasToWaitFor;
+ private Duration waitForReplicasTimeout;
+ private boolean majority;
+ private boolean throwOnTimeoutInWaitForReplicas;
+
+ private boolean waitForIndexes;
+ private Duration waitForIndexesTimeout;
+ private boolean throwOnTimeoutInWaitForIndexes;
+ private String[] waitForSpecificIndexes;
+
+ // getters and setters
+\}
+`}
+
+
+
+
+## Example
+
+
+
+{`try (IDocumentSession session = documentStore.openSession()) \{
+
+ ObjectNode user3 = mapper.createObjectNode();
+ user3.put("Name", "James");
+
+ PutCommandDataWithJson user3Cmd = new PutCommandDataWithJson("users/3", null, user3);
+
+ DeleteCommandData deleteCmd = new DeleteCommandData("users/2-A", null);
+    List<ICommandData> commands = Arrays.asList(user3Cmd, deleteCmd);
+
+ BatchCommand batch = new BatchCommand(documentStore.getConventions(), commands);
+ session.advanced().getRequestExecutor().execute(batch);
+
+\}
+`}
+
+
+
+
+All the commands in the batch will succeed or fail as a **transaction**. Other users will not be able to see any of the changes until the entire batch completes.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-nodejs.mdx b/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-nodejs.mdx
new file mode 100644
index 0000000000..986fd6ac70
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/batches/_how-to-send-multiple-commands-using-a-batch-nodejs.mdx
@@ -0,0 +1,192 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the low-level `SingleNodeBatchCommand` to send **multiple commands** in a **single request** to the server.
+ This reduces the number of remote calls and allows several operations to share the same transaction.
+
+* All the commands sent in the batch are executed as a **single transaction** on the node the client communicated with.
+ If any command fails, the entire batch is rolled back, ensuring data integrity.
+
+* The commands are replicated to other nodes in the cluster only AFTER the transaction is successfully completed on that node.
+
+* In this page:
+ * [Examples](../../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx#examples)
+ * [Available batch commands](../../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx#available-batch-commands)
+ * [Syntax](../../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx#syntax)
+
+
+## Examples
+
+
+
+#### Send multiple commands - using the Store's request executor:
+
+
+{`// This patch request will be used in the following 'PatchCommandData' command
+let patchRequest = new PatchRequest();
+patchRequest.script = "this.HomePhone = 'New phone number'";
+
+// Define the list of batch commands to execute
+const commands = [
+ new PutCommandDataBase("employees/999", null, null, \{
+ FirstName: "James",
+ "@metadata": \{
+ "@collection": "employees"
+ \}
+ \}),
+
+ new PatchCommandData("employees/2-A", null, patchRequest),
+
+ new DeleteCommandData("employees/3-A", null)
+];
+
+// Define the 'SingleNodeBatchCommand' command
+const batchCommand = new SingleNodeBatchCommand(documentStore.conventions, commands);
+
+// Execute the batch command,
+// all the 3 commands defined in the list will be executed in a single transaction
+await documentStore.getRequestExecutor().execute(batchCommand);
+
+// Can access the batch command results
+const commandResults = batchCommand.result.results;
+assert.equal(commandResults.length, 3);
+assert.equal(commandResults[0].type, "PUT");
+assert.equal(commandResults[0]["@id"], "employees/999");
+`}
+
+
+
+
+
+
+#### Send multiple commands - using the Session's request executor:
+* `SingleNodeBatchCommand` can also be executed using the session's request executor.
+
+* Note that the transaction created for the HTTP request when executing `SingleNodeBatchCommand`
+ is separate from the transaction initiated by the session's [saveChanges](../../../client-api/session/saving-changes.mdx) method, even if both are called within the same code block.
+ Learn more about transactions in RavenDB in [Transaction support](../../../client-api/faq/transaction-support.mdx).
+
+
+
+{`const session = documentStore.openSession();
+
+// This patch request will be used in the following 'PatchCommandData' command
+let patchRequest = new PatchRequest();
+patchRequest.script = "this.HomePhone = 'New phone number'";
+
+// Define the list of batch commands to execute
+const commands = [
+ new PutCommandDataBase("employees/999", null, null, \{
+ FirstName: "James",
+ "@metadata": \{
+ "@collection": "employees"
+ \}
+ \}),
+
+ new PatchCommandData("employees/2-A", null, patchRequest),
+
+ new DeleteCommandData("employees/3-A", null)
+];
+
+// Define the 'SingleNodeBatchCommand' command
+const batchCommand = new SingleNodeBatchCommand(documentStore.conventions, commands);
+
+// Execute the batch command,
+// all the 3 commands defined in the list will be executed in a single transaction
+await session.advanced.requestExecutor.execute(batchCommand);
+
+// Can access the batch command results
+const commandResults = batchCommand.result.results;
+assert.equal(commandResults.length, 3);
+assert.equal(commandResults[0].type, "PUT");
+assert.equal(commandResults[0]["@id"], "employees/999");
+`}
+
+
+
+
+
+
+## Available batch commands
+
+* **The following commands can be sent in a batch via `SingleNodeBatchCommand`**:
+
+ * BatchPatchCommandData
+ * CopyAttachmentCommandData
+ * CountersBatchCommandData
+ * DeleteAttachmentCommandData
+ * DeleteCommandData
+ * DeleteCompareExchangeCommandData
+ * DeletePrefixedCommandData
+ * ForceRevisionCommandData
+ * IncrementalTimeSeriesBatchCommandData
+ * JsonPatchCommandData
+ * MoveAttachmentCommandData
+ * PatchCommandData
+ * PutAttachmentCommandData
+ * PutCommandData
+ * PutCompareExchangeCommandData
+ * TimeSeriesBatchCommandData
+
+
+
+## Syntax
+
+
+
+{`SingleNodeBatchCommand(conventions, commands);
+SingleNodeBatchCommand(conventions, commands, batchOptions);
+`}
+
+
+
+
+{`// The batchOptions object:
+\{
+ replicationOptions; // ReplicationBatchOptions
+ indexOptions; // IndexBatchOptions
+ shardedOptions; // ShardedBatchOptions
+\}
+
+// The ReplicationBatchOptions object:
+\{
+ timeout?; // number
+ throwOnTimeout?; // boolean
+ replicas?; // number
+ majority?; // boolean
+\}
+
+// The IndexBatchOptions object:
+\{
+ timeout?; // number
+ throwOnTimeout?; // boolean
+ indexes?; // string[]
+\}
+
+// The ShardedBatchOptions object:
+\{
+ batchBehavior; // ShardedBatchBehavior
+\}
+`}
+
+
+
+
+{`// Executing \`SingleNodeBatchCommand\` returns the following object:
+// ================================================================
+
+class BatchCommandResult \{
+ results; // any[]
+ transactionIndex; // number
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx b/versioned_docs/version-7.1/client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx
new file mode 100644
index 0000000000..5faa99697f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx
@@ -0,0 +1,47 @@
+---
+title: "Send Multiple Commands in a Batch"
+hide_table_of_contents: true
+sidebar_label: Send Multiple Commands
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToSendMultipleCommandsUsingABatchCsharp from './_how-to-send-multiple-commands-using-a-batch-csharp.mdx';
+import HowToSendMultipleCommandsUsingABatchJava from './_how-to-send-multiple-commands-using-a-batch-java.mdx';
+import HowToSendMultipleCommandsUsingABatchNodejs from './_how-to-send-multiple-commands-using-a-batch-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_category_.json b/versioned_docs/version-7.1/client-api/commands/documents/_category_.json
new file mode 100644
index 0000000000..65668e24f1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 1,
+ "label": Document Commands,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_delete-csharp.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_delete-csharp.mdx
new file mode 100644
index 0000000000..295eacedb8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_delete-csharp.mdx
@@ -0,0 +1,149 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the low-level `DeleteDocumentCommand` to remove a document from the database.
+
+* To delete a document using a higher-level method, see [deleting entities](../../../client-api/session/deleting-entities.mdx).
+
+* In this page:
+
+ * [Examples](../../../client-api/commands/documents/delete.mdx#examples)
+ * [Syntax](../../../client-api/commands/documents/delete.mdx#syntax)
+
+
+## Examples
+
+
+
+**Delete document command - using the Store's request executor**:
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+ var command = new DeleteDocumentCommand("employees/1-A", null);
+ store.GetRequestExecutor().Execute(command, context);
+}
+`}
+
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+ var command = new DeleteDocumentCommand("employees/1-A", null);
+ await store.GetRequestExecutor().ExecuteAsync(command, context);
+}
+`}
+
+
+
+
+
+
+
+**Delete document command - using the Session's request executor**:
+
+
+
+{`var command = new DeleteDocumentCommand("employees/1-A", null);
+session.Advanced.RequestExecutor.Execute(command, session.Advanced.Context);
+`}
+
+
+
+
+{`var command = new DeleteDocumentCommand("employees/1-A", null);
+await asyncSession.Advanced.RequestExecutor.ExecuteAsync(command, asyncSession.Advanced.Context);
+`}
+
+
+
+
+
+
+
+**Delete document command - with concurrency check**:
+
+
+
+{`// Load a document
+var employeeDocument = session.Load<Employee>("employees/2-A");
+var cv = session.Advanced.GetChangeVectorFor(employeeDocument);
+
+// Modify the document content and save changes
+// The change-vector of the stored document will change
+employeeDocument.Title = "Some new title";
+session.SaveChanges();
+
+try
+{
+ // Try to delete the document with the previous change-vector
+ var command = new DeleteDocumentCommand("employees/2-A", cv);
+ session.Advanced.RequestExecutor.Execute(command, session.Advanced.Context);
+}
+catch (Exception e)
+{
+ // A concurrency exception is thrown
+ // since the change-vector of the document in the database
+ // does not match the change-vector specified in the delete command
+    Assert.IsType<ConcurrencyException>(e);
+}
+`}
+
+
+
+
+{`// Load a document
+var employeeDocument = await asyncSession.LoadAsync<Employee>("employees/2-A");
+var cv = asyncSession.Advanced.GetChangeVectorFor(employeeDocument);
+
+// Modify the document content and save changes
+// The change-vector of the stored document will change
+employeeDocument.Title = "Some new title";
+await asyncSession.SaveChangesAsync();
+
+try
+{
+ // Try to delete the document with the previous change-vector
+ var command = new DeleteDocumentCommand("employees/2-A", cv);
+ await asyncSession.Advanced.RequestExecutor.ExecuteAsync(command, asyncSession.Advanced.Context);
+}
+catch (Exception e)
+{
+ // A concurrency exception is thrown
+ // since the change-vector of the document in the database
+ // does not match the change-vector specified in the delete command
+    Assert.IsType<ConcurrencyException>(e);
+}
+`}
+
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public DeleteDocumentCommand(string id, string changeVector)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | The ID of the document to delete. |
+| **changeVector** | `string` | The change-vector of the document you wish to delete, used for [optimistic concurrency control](../../../server/clustering/replication/change-vector.mdx#concurrency-control--change-vectors). Pass `null` to skip the check and force the deletion. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_delete-java.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_delete-java.mdx
new file mode 100644
index 0000000000..b71810c017
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_delete-java.mdx
@@ -0,0 +1,32 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**Delete** is used to remove a document from a database.
+
+## Syntax
+
+
+
+{`public DeleteDocumentCommand(String id, String changeVector)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------|------|-------------|
+| **id** | `String` | ID of a document to be deleted |
+| **changeVector** | `String` | Entity Change Vector, used for concurrency checks (`null` to skip check) |
+
+## Example
+
+
+
+{`DeleteDocumentCommand command = new DeleteDocumentCommand("employees/1-A", null);
+session.advanced().getRequestExecutor().execute(command);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_delete-nodejs.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_delete-nodejs.mdx
new file mode 100644
index 0000000000..6bdbf75f57
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_delete-nodejs.mdx
@@ -0,0 +1,98 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the low-level `DeleteDocumentCommand` to remove a document from the database.
+
+* To delete a document using a higher-level method, see [deleting entities](../../../client-api/session/deleting-entities.mdx).
+
+* In this page:
+
+ * [Examples](../../../client-api/commands/documents/delete.mdx#examples)
+ * [Syntax](../../../client-api/commands/documents/delete.mdx#syntax)
+
+
+## Examples
+
+
+
+**Delete document command - using the Store's request executor**:
+
+
+{`// Define the Delete Command
+// Pass the document ID & whether to make a concurrency check
+const command = new DeleteDocumentCommand("employees/1-A", null);
+
+// Send the command to the server using the Store's Request Executor
+await documentStore.getRequestExecutor().execute(command);
+`}
+
+
+
+
+
+
+**Delete document command - using the Session's request executor**:
+
+
+{`const command = new DeleteDocumentCommand("employees/1-A", null);
+
+// Send the command to the server using the Session's Request Executor
+await session.advanced.requestExecutor.execute(command);
+`}
+
+
+
+
+
+
+**Delete document command - with concurrency check**:
+
+
+{`// Load a document
+const employeeDocument = await session.load('employees/2-A');
+const cv = session.advanced.getChangeVectorFor(employeeDocument);
+
+// Modify the document content and save changes
+// The change-vector of the stored document will change
+employeeDocument.Title = "Some new title";
+await session.saveChanges();
+
+try \{
+ // Try to delete the document with the previous change-vector
+ const command = new DeleteDocumentCommand("employees/2-A", cv);
+ await session.advanced.requestExecutor.execute(command);
+\}
+catch (err) \{
+ // A concurrency exception is thrown
+ // since the change-vector of the document in the database
+ // does not match the change-vector specified in the delete command
+ assert.equal(err.name, "ConcurrencyException");
+\}
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`DeleteDocumentCommand(id, changeVector);
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | The ID of the document to delete. |
+| **changeVector** | `string` | The change-vector of the document you wish to delete, used for [optimistic concurrency control](../../../server/clustering/replication/change-vector.mdx#concurrency-control--change-vectors). Pass `null` to skip the check and force the deletion. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_delete-php.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_delete-php.mdx
new file mode 100644
index 0000000000..1665aa3ea5
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_delete-php.mdx
@@ -0,0 +1,45 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteDocumentCommand` to remove a document from the database.
+
+* In this page:
+
+ * [Example](../../../client-api/commands/documents/delete.mdx#example)
+ * [Syntax](../../../client-api/commands/documents/delete.mdx#syntax)
+
+
+## Example
+
+
+
+{`$command = new DeleteDocumentCommand("employees/1-A", null);
+$session->advanced()->getRequestExecutor()->execute($command);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`DeleteDocumentCommand(?string $idOrCopy, ?string $changeVector = null);
+`}
+
+
+
+| Parameters | Type | Description |
+|------------|------|-------------|
+| **idOrCopy** | `string` | ID of a document to be deleted |
+| **changeVector** | `string` (optional) | Entity Change Vector, used for concurrency checks (`null` to skip check) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_delete-python.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_delete-python.mdx
new file mode 100644
index 0000000000..bcb7560c07
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_delete-python.mdx
@@ -0,0 +1,46 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteDocumentCommand` to remove a document from the database.
+
+* In this page:
+
+ * [Example](../../../client-api/commands/documents/delete.mdx#example)
+ * [Syntax](../../../client-api/commands/documents/delete.mdx#syntax)
+
+
+## Example
+
+
+
+{`command = DeleteDocumentCommand("employees/1-A", None)
+session.advanced.request_executor.execute_command(command)
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`class DeleteDocumentCommand(VoidRavenCommand):
+ def __init__(self, key: str, change_vector: Optional[str] = None): ...
+`}
+
+
+
+| Parameters | Type | Description |
+|------------|------|-------------|
+| **key** | `str` | ID of a document to be deleted |
+| **change_vector** | `str` (optional) | Entity Change Vector, used for concurrency checks (`None` to skip check) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_get-csharp.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_get-csharp.mdx
new file mode 100644
index 0000000000..547e4e59ac
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_get-csharp.mdx
@@ -0,0 +1,691 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the low-level `GetDocumentsCommand` to retrieve documents from the database.
+
+* To retrieve documents using a higher-level method, see [loading entities](../../../client-api/session/loading-entities.mdx) or [query for documents](../../../client-api/session/querying/how-to-query.mdx).
+
+* In this page:
+ - [Get single document](../../../client-api/commands/documents/get.mdx#get-single-document)
+ - [Get multiple documents](../../../client-api/commands/documents/get.mdx#get-multiple-documents)
+ - [Get metadata only](../../../client-api/commands/documents/get.mdx#get-metadata-only)
+ - [Get paged documents](../../../client-api/commands/documents/get.mdx#get-paged-documents)
+ - [Get documents - by ID prefix](../../../client-api/commands/documents/get.mdx#get-documents---by-id-prefix)
+ - [Get documents - with includes](../../../client-api/commands/documents/get.mdx#get-documents---with-includes)
+ - [Syntax](../../../client-api/commands/documents/get.mdx#syntax)
+
+
+## Get single document
+
+* The following examples demonstrate how to retrieve a document using either the _Store's request executor_
+ or the _Session's request executor_.
+* The remaining examples in this article use the _Store's request executor_, but the Session's approach shown here can be applied to all of them.
+
+
+**Get document command - using the Store's request executor**:
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+ // Define the 'GetDocumentsCommand'
+ var command = new GetDocumentsCommand(store.Conventions,
+ "orders/1-A", null, metadataOnly: false);
+
+ // Call 'Execute' on the Store's Request Executor to send the command to the server
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access the results
+ var blittable = (BlittableJsonReaderObject)command.Result.Results[0];
+
+ // Deserialize the blittable JSON into a strongly-typed 'Order' object
+ var order = (Order)store.Conventions.Serialization.DefaultConverter
+ .FromBlittable(typeof(Order), blittable);
+
+ var orderedAt = order.OrderedAt;
+}
+`}
+
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+ // Define the 'GetDocumentsCommand'
+ var command = new GetDocumentsCommand(store.Conventions,
+ "orders/1-A", null, metadataOnly: false);
+
+ // Call 'ExecuteAsync' on the Store's Request Executor to send the command to the server
+ await store.GetRequestExecutor().ExecuteAsync(command, context);
+
+ // Access the results
+ var blittable = (BlittableJsonReaderObject)command.Result.Results[0];
+
+ // Deserialize the blittable JSON into a strongly-typed 'Order' object
+ var order = (Order)store.Conventions.Serialization.DefaultConverter
+ .FromBlittable(typeof(Order), blittable);
+
+ var orderedAt = order.OrderedAt;
+}
+`}
+
+
+
+
+
+
+
+**Get document command - using the Session's request executor**:
+
+
+
+{`using (var store = new DocumentStore())
+using (var session = store.OpenSession())
+{
+ // Define the 'GetDocumentsCommand'
+ var command = new GetDocumentsCommand(store.Conventions,
+ "orders/1-A", null, metadataOnly: false);
+
+ // Call 'Execute' on the Session's Request Executor to send the command to the server
+ session.Advanced.RequestExecutor.Execute(command, session.Advanced.Context);
+
+ // Access the results
+ var blittable = (BlittableJsonReaderObject)command.Result.Results[0];
+
+ // Deserialize the blittable JSON into a strongly-typed 'Order' object
+ // Setting the last param to 'true' will cause the session to track the 'Order' entity
+ var order = session.Advanced.JsonConverter.FromBlittable(ref blittable,
+ "orders/1-A", trackEntity: true);
+
+ var orderedAt = order.OrderedAt;
+}
+`}
+
+
+
+
+{`using (var store = new DocumentStore())
+using (var asyncSession = store.OpenAsyncSession())
+{
+ // Define the 'GetDocumentsCommand'
+ var command = new GetDocumentsCommand(store.Conventions,
+ "orders/1-A", null, metadataOnly: false);
+
+ // Call 'ExecuteAsync' on the Session's Request Executor to send the command to the server
+ await asyncSession.Advanced.RequestExecutor.ExecuteAsync(
+ command, asyncSession.Advanced.Context);
+
+ // Access the results
+ var blittable = (BlittableJsonReaderObject)command.Result.Results[0];
+
+ // Deserialize the blittable JSON into a strongly-typed 'Order' object
+ // Setting the last param to 'true' will cause the session to track the 'Order' entity
+ var order = asyncSession.Advanced.JsonConverter.FromBlittable(ref blittable,
+ "orders/1-A", trackEntity: true);
+
+ var orderedAt = order.OrderedAt;
+}
+`}
+
+
+
+
+
+
+
+## Get multiple documents
+
+
+
+**Get multiple documents**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Pass a list of document IDs to the get command
+ var command = new GetDocumentsCommand(store.Conventions,
+ new[] \{ "orders/1-A", "employees/2-A", "products/1-A" \}, null, false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access results
+ var orderBlittable = (BlittableJsonReaderObject)command.Result.Results[0];
+ var orderDocument = (Order)store.Conventions.Serialization.DefaultConverter
+ .FromBlittable(typeof(Order), orderBlittable);
+
+ var employeeBlittable = (BlittableJsonReaderObject)command.Result.Results[1];
+ var employeeDocument = (Employee)store.Conventions.Serialization.DefaultConverter
+        .FromBlittable(typeof(Employee), employeeBlittable);
+
+ var productBlittable = (BlittableJsonReaderObject)command.Result.Results[2];
+ var productDocument = (Product)store.Conventions.Serialization.DefaultConverter
+ .FromBlittable(typeof(Product), productBlittable);
+\}
+`}
+
+
+
+
+
+
+**Get multiple documents - missing documents**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Assuming that employees/9999-A doesn't exist
+ var command = new GetDocumentsCommand(store.Conventions,
+ new[] \{ "orders/1-A", "employees/9999-A", "products/3-A" \}, null, false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Results will contain 'null' for any missing document
+ var results = command.Result.Results; // orders/1-A, null, products/3-A
+ Assert.Null(results[1]);
+\}
+`}
+
+
+
+
+
+
+## Get metadata only
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Pass 'true' in the 'metadataOnly' param to retrieve only the document METADATA
+ var command = new GetDocumentsCommand(store.Conventions,
+ "orders/1-A", null, metadataOnly: true);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access results
+ var blittable = (BlittableJsonReaderObject)command.Result.Results[0];
+ var documentMetadata = (BlittableJsonReaderObject)blittable["@metadata"];
+
+ // Print out all metadata properties
+ foreach (var propertyName in documentMetadata.GetPropertyNames())
+ \{
+        documentMetadata.TryGet<object>(propertyName, out var metadataValue);
+        Console.WriteLine($"\{propertyName\} = \{metadataValue\}");
+    \}
+\}
+`}
+
+
+
+
+## Get paged documents
+
+* You can retrieve documents in pages by specifying how many documents to skip and how many to fetch.
+* With this overload no specific collection is given, so the documents will be fetched from ALL collections.
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Specify the number of documents to skip (start)
+ // and the number of documents to get (pageSize)
+ var command = new GetDocumentsCommand(start: 0, pageSize: 128);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // The documents are sorted by the last modified date,
+ // with the most recent modifications appearing first.
+ var firstDocs = command.Result.Results;
+\}
+`}
+
+
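+
+A common pattern (shown here as a sketch, not part of the original examples) is to page
+through all documents by advancing `start` until a page comes back smaller than `pageSize`:
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+    var start = 0;
+    const int pageSize = 128;
+
+    while (true)
+    \{
+        var command = new GetDocumentsCommand(start: start, pageSize: pageSize);
+        store.GetRequestExecutor().Execute(command, context);
+
+        var page = command.Result.Results;
+        // Process the documents in 'page' here...
+
+        if (page.Length < pageSize)
+            break; // Last page reached
+
+        start += pageSize;
+    \}
+\}
+`}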
+
+
+
+## Get documents - by ID prefix
+
+
+
+**Retrieve documents that match a specified ID prefix**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Return up to 50 documents with ID that starts with 'products/'
+ var command = new GetDocumentsCommand(store.Conventions,
+ startWith: "products/",
+ startAfter: null,
+ matches: null,
+ exclude: null,
+ start: 0,
+ pageSize: 50,
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access a Product document
+ var blittable = (BlittableJsonReaderObject)command.Result.Results[0];
+ var product = (Product)store.Conventions.Serialization.DefaultConverter
+ .FromBlittable(typeof(Product), blittable);
+\}
+`}
+
+
+
+
+
+
+**Retrieve documents that match a specified ID prefix - with "matches" pattern**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Return up to 50 documents with IDs that start with 'orders/'
+ // and the rest of the ID either begins with '23',
+ // or contains any character at the 1st position and ends with '10-A'
+ // e.g. orders/234-A, orders/810-A
+ var command = new GetDocumentsCommand(store.Conventions,
+ startWith: "orders/",
+ startAfter: null,
+ matches: "23*|?10-A",
+ exclude: null,
+ start: 0,
+ pageSize: 50,
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access an Order document
+ var blittable = (BlittableJsonReaderObject)command.Result.Results[0];
+ var order = (Order)store.Conventions.Serialization.DefaultConverter
+ .FromBlittable(typeof(Order), blittable);
+
+ Assert.True(order.Id.StartsWith("orders/23") ||
+ Regex.IsMatch(order.Id, @"^orders/.\{1\}10-A$"));
+\}
+`}
+
+
+
+
+
+
+**Retrieve documents that match a specified ID prefix - with "exclude" pattern**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Return up to 50 documents with IDs that start with 'orders/'
+ // and the rest of the ID excludes documents ending with '10-A',
+ // e.g. will return orders/820-A, but not orders/810-A
+ var command = new GetDocumentsCommand(store.Conventions,
+ startWith: "orders/",
+ startAfter: null,
+ matches: null,
+ exclude: "*10-A",
+ start: 0,
+ pageSize: 50,
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access an Order document
+ var blittable = (BlittableJsonReaderObject)command.Result.Results[0];
+ var order = (Order)store.Conventions.Serialization.DefaultConverter
+ .FromBlittable(typeof(Order), blittable);
+
+ Assert.True(order.Id.StartsWith("orders/") && !order.Id.EndsWith("10-A"));
+\}
+`}
+
+
+
+
+
+
+## Get documents - with includes
+
+
+
+**Include related documents**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Fetch document products/77-A and include its related Supplier document
+ var command = new GetDocumentsCommand(store.Conventions,
+ id:"products/77-A",
+ includes: new[] \{ "Supplier" \},
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ var productBlittable = (BlittableJsonReaderObject)command.Result.Results[0];
+ if (productBlittable.TryGet("Supplier", out var supplierId))
+ \{
+ // Access the related document that was included
+ var supplierBlittable =
+ (BlittableJsonReaderObject)command.Result.Includes[supplierId];
+
+ var supplier = (Supplier)store.Conventions.Serialization.DefaultConverter
+ .FromBlittable(typeof(Supplier), supplierBlittable);
+ \}
+\}
+`}
+
+
+
+
+
+
+**Include counters**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Fetch document products/77-A and include the specified counters
+ var command = new GetDocumentsCommand(store.Conventions,
+ ids:new[] \{"products/77-A"\},
+ includes: null,
+ // Pass the names of the counters to include. In this example,
+ // the counter names in RavenDB's sample data are stars...
+ counterIncludes: new[] \{ "⭐", "⭐⭐" \},
+ timeSeriesIncludes: null,
+ compareExchangeValueIncludes: null,
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access the included counters results
+ var counters = (BlittableJsonReaderObject)command.Result.CounterIncludes;
+ var countersBlittableArray =
+ (BlittableJsonReaderArray)counters["products/77-A"];
+
+ var counter = (BlittableJsonReaderObject)countersBlittableArray[0];
+ var counterName = counter["CounterName"];
+ var counterValue = counter["TotalValue"];
+\}
+`}
+
+
+
+
+
+
+**Include time series**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Fetch document employees/1-A and include the specified time series
+ var command = new GetDocumentsCommand(store.Conventions,
+ ids:new[] \{"employees/1-A"\},
+ includes: null,
+ counterIncludes: null,
+ // Specify the time series name and the time range
+ timeSeriesIncludes: new[] \{ new TimeSeriesRange
+ \{
+ Name = "HeartRates",
+ From = DateTime.MinValue,
+ To = DateTime.MaxValue
+ \} \},
+ compareExchangeValueIncludes:null,
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access the included time series results
+ var timeSeriesBlittable =
+ (BlittableJsonReaderObject)command.Result.TimeSeriesIncludes["employees/1-A"];
+
+ var timeSeriesBlittableArray =
+ (BlittableJsonReaderArray)timeSeriesBlittable["HeartRates"];
+
+ var ts = (BlittableJsonReaderObject)timeSeriesBlittableArray[0];
+ var entries = (BlittableJsonReaderArray)ts["Entries"];
+
+ var tsEntry = (BlittableJsonReaderObject)entries[0];
+ var entryTimeStamp = tsEntry["Timestamp"];
+ var entryValues = tsEntry["Values"];
+\}
+`}
+
+
+
+
+
+
+**Include revisions**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Fetch document orders/826-A and include the specified revisions
+ var command = new GetDocumentsCommand(store.Conventions,
+ ids:new[] \{"orders/826-A"\},
+ includes: null,
+ counterIncludes: null,
+ // Specify list of document fields (part of document orders/826-A),
+ // where each field is expected to contain the change-vector
+ // of the revision you wish to include.
+ revisionsIncludesByChangeVector: new[]
+ \{
+ "RevisionChangeVectorField1",
+ "RevisionChangeVectorField2"
+ \},
+ revisionIncludeByDateTimeBefore: null,
+ timeSeriesIncludes: null,
+ compareExchangeValueIncludes: null,
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access the included revisions
+ var revisions = (BlittableJsonReaderArray)command.Result.RevisionIncludes;
+
+ var revisionObj = (BlittableJsonReaderObject)revisions[0];
+ var revision = (BlittableJsonReaderObject)revisionObj["Revision"];
+\}
+`}
+
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Fetch document orders/826-A and include the specified revisions
+ var command = new GetDocumentsCommand(store.Conventions,
+ ids:new[] \{"orders/826-A"\},
+ includes: null,
+ counterIncludes: null,
+ // Another option is to specify a single document field (part of document orders/826-A).
+ // This field is expected to contain a list of all the change-vectors
+ // for the revisions you wish to include.
+ revisionsIncludesByChangeVector: new[]
+ \{
+ "RevisionsChangeVectors"
+ \},
+ revisionIncludeByDateTimeBefore: null,
+ timeSeriesIncludes: null,
+ compareExchangeValueIncludes: null,
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access the included revisions
+ var revisions = (BlittableJsonReaderArray)command.Result.RevisionIncludes;
+
+ var revisionObj = (BlittableJsonReaderObject)revisions[0];
+ var revision = (BlittableJsonReaderObject)revisionObj["Revision"];
+\}
+`}
+
+
+
+
+
+
+**Include compare-exchange values**:
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+\{
+ // Fetch document orders/826-A and include the specified compare-exchange
+ var command = new GetDocumentsCommand(store.Conventions,
+ ids:new[] \{"orders/826-A"\},
+ includes: null,
+ counterIncludes: null,
+ revisionsIncludesByChangeVector: null,
+ revisionIncludeByDateTimeBefore: null,
+ timeSeriesIncludes: null,
+ // Similar to the previous "include revisions" examples,
+ // EITHER:
+ // Specify a list of document fields (part of document orders/826-A),
+ // where each field is expected to contain a compare-exchange KEY
+ // for the compare-exchange item you wish to include
+ // OR:
+ // Specify a single document field that contains a list of all keys to include.
+ compareExchangeValueIncludes: [
+ "CmpXchgItemField1",
+ "CmpXchgItemField2"
+ ],
+ metadataOnly: false);
+
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access the included compare-exchange items
+ var cmpXchgItems =
+ (BlittableJsonReaderObject)command.Result.CompareExchangeValueIncludes;
+
+ var cmpXchgItemKey = cmpXchgItems.GetPropertyNames()[0]; // The cmpXchg KEY NAME
+ var cmpXchgItemObj = (BlittableJsonReaderObject)cmpXchgItems[cmpXchgItemKey];
+
+ var cmpXchgItemValueObj = (BlittableJsonReaderObject)cmpXchgItemObj["Value"];
+ var cmpXchgItemValue = cmpXchgItemValueObj["Object"]; // The cmpXchg KEY VALUE
+\}
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+// ====================
+
+public GetDocumentsCommand(int start, int pageSize)
+
+public GetDocumentsCommand(DocumentConventions conventions,
+ string id,
+ string[] includes,
+ bool metadataOnly);
+
+public GetDocumentsCommand(DocumentConventions conventions,
+ string[] ids,
+ string[] includes,
+ bool metadataOnly);
+
+public GetDocumentsCommand(DocumentConventions conventions,
+ string[] ids,
+ string[] includes,
+ string[] counterIncludes,
+    IEnumerable<AbstractTimeSeriesRange> timeSeriesIncludes,
+ string[] compareExchangeValueIncludes,
+ bool metadataOnly);
+
+public GetDocumentsCommand(DocumentConventions conventions,
+ string[] ids,
+ string[] includes,
+ string[] counterIncludes,
+    IEnumerable<string> revisionsIncludesByChangeVector,
+    DateTime? revisionIncludeByDateTimeBefore,
+    IEnumerable<AbstractTimeSeriesRange> timeSeriesIncludes,
+ string[] compareExchangeValueIncludes,
+ bool metadataOnly);
+
+public GetDocumentsCommand(DocumentConventions conventions,
+ string[] ids,
+ string[] includes,
+ bool includeAllCounters,
+    IEnumerable<AbstractTimeSeriesRange> timeSeriesIncludes,
+ string[] compareExchangeValueIncludes,
+ bool metadataOnly);
+
+public GetDocumentsCommand(DocumentConventions conventions,
+ string startWith,
+ string startAfter,
+ string matches, string exclude,
+ int start, int pageSize,
+ bool metadataOnly);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------------------------|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **conventions** | `DocumentConventions` | The store's conventions. |
+| **id** | `string` | ID of the document to get. |
+| **ids** | `string[]` | IDs of the documents to get. |
+| **includes** | `string[]` | Related documents to fetch along with the document. |
+| **counterIncludes** | `string[]` | Counters to fetch along with the document. |
+| **includeAllCounters** | `bool` | Whether to include all counters. |
+| **timeSeriesIncludes** | `AbstractTimeSeriesRange[]` | Time series to fetch along with the document. |
+| **compareExchangeValueIncludes** | `string[]` | List of document fields containing cmpXchg keys of the compare-exchange items you wish to include. |
+| **revisionsIncludesByChangeVector** | `string[]` | List of document fields containing change-vectors of the revisions you wish to include. |
+| **revisionIncludeByDateTimeBefore** | `DateTime` | When this date is provided, retrieve the most recent revision that was created before this date value. |
+| **metadataOnly** | `bool` | Whether to fetch the whole document or just the metadata. |
+| **start** | `int` | Number of documents that should be skipped. |
+| **pageSize** | `int` | Maximum number of documents that will be retrieved. |
+| **startWith** | `string` | Fetch only documents whose ID starts with this prefix. |
+| **startAfter** | `string` | Skip 'document fetching' until the given ID is found, and return documents after that ID (default: null). |
+| **matches** | `string` | Pipe ('\|') separated values for which document IDs (after `startWith`) should be matched (`?` - any single character, `*` - any characters). |
+| **exclude** | `string` | Pipe ('\|') separated values for which document IDs (after `startWith`) should NOT be matched (`?` - any single character, `*` - any characters). |
+
+
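+
+For instance, the `startAfter` parameter from the overloads above can be combined with an
+ID prefix to resume fetching from a known position; `orders/830-A` below is just an
+arbitrary example ID, and the executor/context setup is the same as in the examples above:
+
+
+
+{`// Return documents whose ID starts with 'orders/',
+// skipping IDs up to and including 'orders/830-A'
+var command = new GetDocumentsCommand(store.Conventions,
+    startWith: "orders/",
+    startAfter: "orders/830-A",
+    matches: null,
+    exclude: null,
+    start: 0,
+    pageSize: 50,
+    metadataOnly: false);
+
+store.GetRequestExecutor().Execute(command, context);
+`}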
+
+{`// The \`GetDocumentsCommand\` result:
+// =================================
+
+public class GetDocumentsResult
+\{
+ public BlittableJsonReaderObject Includes \{ get; set; \}
+ public BlittableJsonReaderArray Results \{ get; set; \}
+ public BlittableJsonReaderObject CounterIncludes \{ get; set; \}
+ public BlittableJsonReaderArray RevisionIncludes \{ get; set; \}
+ public BlittableJsonReaderObject TimeSeriesIncludes \{ get; set; \}
+ public BlittableJsonReaderObject CompareExchangeValueIncludes \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_get-java.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_get-java.mdx
new file mode 100644
index 0000000000..9dba018881
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_get-java.mdx
@@ -0,0 +1,260 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+There are a few methods that allow you to retrieve documents from a database:
+
+- [Get single document](../../../client-api/commands/documents/get.mdx#get-single-document)
+- [Get multiple documents](../../../client-api/commands/documents/get.mdx#get-multiple-documents)
+- [Get paged documents](../../../client-api/commands/documents/get.mdx#get-paged-documents)
+- [Get documents by starts with](../../../client-api/commands/documents/get.mdx#get-by-starts-with)
+- [Get metadata only](../../../client-api/commands/documents/get.mdx#get-metadata-only)
+
+## Get single document
+
+**GetDocumentsCommand** can be used to retrieve a single document.
+
+### Syntax
+
+
+
+{`public GetDocumentsCommand(String id, String[] includes, boolean metadataOnly)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|------------|-----------------------------------------------------------|
+| **id** | `String` | ID of the documents to get. |
+| **includes** | `String[]` | Related documents to fetch along with the document. |
+| **metadataOnly** | `boolean` | Whether to fetch the whole document or just the metadata. |
+
+### Example
+
+
+
+{`GetDocumentsCommand command = new GetDocumentsCommand(
+ "orders/1-A", null, false);
+session.advanced().getRequestExecutor().execute(command);
+ObjectNode order = (ObjectNode) command.getResult().getResults().get(0);
+`}
+
+
+
+
+
+## Get multiple documents
+
+**GetDocumentsCommand** can also be used to retrieve a list of documents.
+
+### Syntax
+
+
+
+{`public GetDocumentsCommand(String[] ids, String[] includes, boolean metadataOnly)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|------------|--------------------------------------------------------|
+| **ids** | `String[]` | IDs of the documents to get. |
+| **includes** | `String[]` | Related documents to fetch along with the documents. |
+| **metadataOnly** | `boolean` | Whether to fetch whole documents or just the metadata. |
+
+### Example I
+
+
+
+{`GetDocumentsCommand command = new GetDocumentsCommand(
+ new String[]\{"orders/1-A", "employees/3-A"\}, null, false);
+session.advanced().getRequestExecutor().execute(command);
+ObjectNode order = (ObjectNode) command.getResult().getResults().get(0);
+ObjectNode employee = (ObjectNode) command.getResult().getResults().get(1);
+`}
+
+
+
+### Example II - Using Includes
+
+
+
+{`// Fetch employees/5-A and their boss.
+GetDocumentsCommand command = new GetDocumentsCommand(
+ "employees/5-A", new String[]\{"ReportsTo"\}, false);
+session.advanced().getRequestExecutor().execute(command);
+
+ObjectNode employee = (ObjectNode) command.getResult().getResults().get(0);
+String bossId = employee.get("ReportsTo").asText();
+ObjectNode boss = (ObjectNode) command.getResult().getIncludes().get(bossId);
+`}
+
+
+
+### Example III - Missing Documents
+
+
+
+{`// Assuming that products/9999-A doesn't exist.
+GetDocumentsCommand command = new GetDocumentsCommand(
+ new String[]\{"products/1-A", "products/9999-A", "products/3-A"\}, null, false);
+session.advanced().getRequestExecutor().execute(command);
+ArrayNode products = command.getResult().getResults(); // products/1-A, null, products/3-A
+`}
+
+
+
+
+
+## Get paged documents
+
+**GetDocumentsCommand** can also be used to retrieve a paged set of documents.
+
+### Syntax
+
+
+
+{`public GetDocumentsCommand(int start, int pageSize)
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|-------|------------------------------------------------------|
+| **start** | `int` | Number of documents that should be skipped. |
+| **pageSize** | `int` | Maximum number of documents that will be retrieved. |
+
+### Example
+
+
+
+{`GetDocumentsCommand command = new GetDocumentsCommand(0, 128);
+session.advanced().getRequestExecutor().execute(command);
+ArrayNode firstDocs = command.getResult().getResults();
+`}
+
+
+
+
+
+## Get by starts with
+
+**GetDocumentsCommand** can be used to retrieve multiple documents for a specified ID prefix.
+
+### Syntax
+
+
+
+{`public GetDocumentsCommand(
+ String startWith,
+ String startAfter,
+ String matches,
+ String exclude,
+ int start,
+ int pageSize,
+ boolean metadataOnly)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **startWith** | `String` | Prefix for which documents should be returned. |
+| **startAfter** | `String` | Skip 'document fetching' until the given ID is found, and return documents after that ID (default: null). |
+| **matches** | `String` | Pipe ('\|') separated values for which document IDs (after 'startWith') should be matched ('?' any single character, '*' any characters). |
+| **exclude** | `String` | Pipe ('\|') separated values for which document IDs (after 'startWith') should **not** be matched ('?' any single character, '*' any characters). |
+| **start** | `int` | Number of documents that should be skipped. |
+| **pageSize** | `int` | Maximum number of documents that will be retrieved. |
+| **metadataOnly** | `boolean` | Whether to return only the document metadata. |
+
+### Example I
+
+
+
+{`GetDocumentsCommand command = new GetDocumentsCommand(
+ "products", //startWith
+ null, //startAfter
+ null, // matches
+ null, //exclude
+ 0, // start
+ 128, // pageSize
+ false //metadataOnly
+);
+
+session.advanced().getRequestExecutor().execute(command);
+ArrayNode products = command.getResult().getResults();
+`}
+
+
+
+### Example II
+
+
+
+{`// Return up to 128 documents with IDs that start with 'products/'
+// and the rest of the ID begins with "1" or "2", e.g. products/10, products/25
+GetDocumentsCommand command = new GetDocumentsCommand(
+ "products", //startWith
+ null, // startAfter
+ "1*|2*", // matches
+ null, // exclude
+ 0, //start
+ 128, //pageSize
+ false); //metadataOnly
+`}
+
+
+
+### Example III
+
+
+
+{`// Return up to 128 documents with IDs that start with 'products/'
+// and the rest of the ID has a length of 3, begins and ends with "1",
+// and contains any character at the 2nd position, e.g. products/101, products/1B1
+GetDocumentsCommand command = new GetDocumentsCommand(
+ "products", //startWith
+ null, // startAfter
+ "1?1", // matches
+ null, // exclude
+ 0, //start
+ 128, //pageSize
+ false); //metadataOnly
+session.advanced().getRequestExecutor().execute(command);
+ArrayNode products = command.getResult().getResults();
+`}
+
+
+
+
+
+## Get metadata only
+
+**GetDocumentsCommand** can be used to retrieve the metadata of documents.
+
+### Example
+
+
+
+{`GetDocumentsCommand command = new GetDocumentsCommand("orders/1-A", null, true);
+session.advanced().getRequestExecutor().execute(command);
+
+JsonNode result = command.getResult().getResults().get(0);
+ObjectNode documentMetadata = (ObjectNode) result.get("@metadata");
+
+// Print out all the metadata properties.
+Iterator<String> fieldIterator = documentMetadata.fieldNames();
+
+while (fieldIterator.hasNext()) \{
+ String field = fieldIterator.next();
+ JsonNode value = documentMetadata.get(field);
+ System.out.println(field + " = " + value);
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_get-nodejs.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_get-nodejs.mdx
new file mode 100644
index 0000000000..ecd6b849eb
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_get-nodejs.mdx
@@ -0,0 +1,495 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the low-level `GetDocumentsCommand` to retrieve documents from the database.
+
+* To retrieve documents using a higher-level method, see [loading entities](../../../client-api/session/loading-entities.mdx) or [query for documents](../../../client-api/session/querying/how-to-query.mdx).
+
+* In this page:
+ - [Get single document](../../../client-api/commands/documents/get.mdx#get-single-document)
+ - [Get multiple documents](../../../client-api/commands/documents/get.mdx#get-multiple-documents)
+ - [Get metadata only](../../../client-api/commands/documents/get.mdx#get-metadata-only)
+ - [Get paged documents](../../../client-api/commands/documents/get.mdx#get-paged-documents)
+ - [Get documents - by ID prefix](../../../client-api/commands/documents/get.mdx#get-documents---by-id-prefix)
+ - [Get documents - with includes](../../../client-api/commands/documents/get.mdx#get-documents---with-includes)
+ - [Syntax](../../../client-api/commands/documents/get.mdx#syntax)
+
+
+## Get single document
+
+* The following examples demonstrate how to retrieve a document using either the _Store's request executor_
+ or the _Session's request executor_.
+* The remaining examples in this article use the _Store's request executor_, but the Session's approach shown here can be applied to all of them.
+
+
+**Get document command - using the Store's request executor**:
+
+
+{`// Define the 'GetDocumentsCommand'
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ id: "orders/1-A"
+\});
+
+// Call 'execute' on the Store's Request Executor to send the command to the server
+await documentStore.getRequestExecutor().execute(command);
+
+// Access the results
+const order = command.result.results[0];
+const orderedAt = order.OrderedAt;
+`}
+
+
+
+
+
+
+**Get document command - using the Session's request executor**:
+
+
+{`const session = documentStore.openSession();
+
+// Define the 'GetDocumentsCommand'
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ id: "orders/1-A"
+\});
+
+// Call 'execute' on the Session's Request Executor to send the command to the server
+await session.advanced.requestExecutor.execute(command);
+
+// Access the results
+const order = command.result.results[0];
+const orderedAt = order.OrderedAt;
+`}
+
+
+
+
+
+
+## Get multiple documents
+
+
+
+**Get multiple documents**:
+
+
+{`// Pass a list of document IDs to the get command
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ ids: ["orders/1-A", "employees/2-A", "products/1-A"]
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access results
+const order = command.result.results[0];
+const employee = command.result.results[1];
+const product = command.result.results[2];
+`}
+
+
+
+
+
+
+**Get multiple documents - missing documents**:
+
+
+{`// Assuming that employees/9999-A doesn't exist
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ ids: [ "orders/1-A", "employees/9999-A", "products/3-A" ]
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Results will contain 'null' for any missing document
+const results = command.result.results; // orders/1-A, null, products/3-A
+assert.equal(results[1], null);
+`}
+
+
+
+
+
+
+## Get metadata only
+
+
+
+{`// Pass 'true' in the 'metadataOnly' param to retrieve only the document METADATA
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ id: "orders/1-A",
+ metadataOnly: true
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access results
+const document = command.result.results[0];
+const metadata = document["@metadata"];
+
+// Print out all metadata properties
+for (const propertyName in metadata) \{
+ console.log(\`$\{propertyName\} = $\{metadata[propertyName]\}\`);
+\}
+`}
+
+
+
+
+
+## Get paged documents
+
+* You can retrieve documents in pages by specifying how many documents to skip and how many to fetch.
+* With this overload, no specific collection is targeted; the documents will be fetched from ALL collections.
+
+
+
+{`// Specify the number of documents to skip (start)
+// and the number of documents to get (pageSize)
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ start: 0,
+ pageSize: 128
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// The documents are sorted by the last modified date,
+// with the most recent modifications appearing first.
+const firstDocs = command.result.results;
+`}
+
+
+
+
+
+## Get documents - by ID prefix
+
+
+
+**Retrieve documents that match a specified ID prefix**:
+
+
+{`// Return up to 50 documents with ID that starts with 'products/'
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ startsWith: "products/",
+ start: 0,
+ pageSize: 50
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access a Product document
+const product = command.result.results[0];
+`}
+
+
+
+
+
+
+**Retrieve documents that match a specified ID prefix - with "matches" pattern**:
+
+
+{`// Return up to 50 documents with IDs that start with 'orders/'
+// and the rest of the ID either begins with '23',
+// or contains any character at the 1st position and ends with '10-A'
+// e.g. orders/234-A, orders/810-A
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ startsWith: "orders/",
+ matches: "23*|?10-A",
+ start: 0,
+ pageSize: 50
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access an Order document
+const order = command.result.results[0];
+
+const orderId = order["@metadata"]["@id"];
+assert.ok(orderId.startsWith("orders/23") || /^orders\\/.\{1\}10-A$/.test(orderId));
+`}
+
+
+
+
+
+
+**Retrieve documents that match a specified ID prefix - with "exclude" pattern**:
+
+
+{`// Return up to 50 documents with IDs that start with 'orders/'
+// and the rest of the ID excludes documents ending with '10-A',
+// e.g. will return orders/820-A, but not orders/810-A
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ startsWith: "orders/",
+ exclude: "*10-A",
+ start: 0,
+ pageSize: 50
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access an Order document
+const order = command.result.results[0];
+
+const orderId = order["@metadata"]["@id"];
+assert.ok(orderId.startsWith("orders/") && !orderId.endsWith("10-A"));
+`}
+
+
+
+
+
+
+## Get documents - with includes
+
+
+
+**Include related documents**:
+
+
+{`// Fetch document products/77-A and include its related Supplier document
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ id: "products/77-A",
+ includes: [ "Supplier" ]
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access the related document that was included
+const product = command.result.results[0];
+const supplierId = product["Supplier"];
+const supplier = command.result.includes[supplierId];
+`}
+
+
+
+
+
+
+**Include counters**:
+
+
+{`// Fetch document products/77-A and include the specified counters
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ id: "products/77-A",
+ // Pass the names of the counters to include. In this example,
+ // the counter names in RavenDB's sample data are stars...
+ counterIncludes: ["⭐", "⭐⭐"]
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access the included counters results
+const counters = command.result.counterIncludes["products/77-A"];
+const counter = counters[0];
+
+const counterName = counter["counterName"];
+const counterValue = counter["totalValue"];
+`}
+
+
+
+
+
+
+**Include time series**:
+
+
+{`// Fetch document employees/1-A and include the specified time series
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ ids: ["employees/1-A"],
+ // Specify the time series name and the time range
+ timeSeriesIncludes: [
+ \{
+ name: "HeartRates",
+ from: new Date("2020-04-01T00:00:00.000Z"),
+ to: new Date("2024-12-31T00:00:00.000Z")
+ \}
+ ]
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access the included time series results
+const timeseries = command.result.timeSeriesIncludes["employees/1-A"];
+const tsEntries = timeseries["HeartRates"][0].entries;
+
+const entryTimeStamp = tsEntries[0].timestamp;
+const entryValues = tsEntries[0].values;
+`}
+
+
+
+
+
+
+**Include revisions**:
+
+
+{`// Fetch document orders/826-A and include the specified revisions
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ ids: ["orders/826-A"],
+ // Specify list of document fields (part of document orders/826-A),
+ // where each field is expected to contain the change-vector
+ // of the revision you wish to include.
+ revisionsIncludesByChangeVector: [
+ "RevisionChangeVectorField1",
+ "RevisionChangeVectorField2"
+ ]
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access the included revisions
+const revisionObj = command.result.revisionIncludes[0];
+const revision = revisionObj.Revision;
+`}
+
+
+
+
+{`// Fetch document orders/826-A and include the specified revisions
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ ids: ["orders/826-A"],
+ // Another option is to specify a single document field (part of document orders/826-A).
+ // This field is expected to contain a list of all the change-vectors
+ // for the revisions you wish to include.
+ revisionsIncludesByChangeVector: [
+ "RevisionsChangeVectors"
+ ]
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access the included revisions
+const revisionObj = command.result.revisionIncludes[0];
+const revision = revisionObj.Revision;
+`}
+
+
+
+
+
+
+**Include compare-exchange values**:
+
+
+{`// Fetch document orders/826-A and include the specified compare-exchange
+const command = new GetDocumentsCommand(\{
+ conventions: documentStore.conventions,
+ ids: ["orders/826-A"],
+ // Similar to the previous "include revisions" examples,
+ // EITHER:
+ // Specify a list of document fields (part of document orders/826-A),
+ // where each field is expected to contain a compare-exchange KEY
+ // for the compare-exchange item you wish to include
+ // OR:
+ // Specify a single document field that contains a list of all keys to include.
+ compareExchangeValueIncludes: [
+ "CmpXchgItemField1",
+ "CmpXchgItemField2"
+ ]
+\});
+
+await documentStore.getRequestExecutor().execute(command);
+
+// Access the included compare-exchange items
+const cmpXchgItems = command.result.compareExchangeValueIncludes;
+
+const cmpXchgItemKey = Object.keys(cmpXchgItems)[0];
+const cmpXchgItemValue = cmpXchgItems[cmpXchgItemKey].value.Object;
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+// ====================
+
+new GetDocumentsCommand(\{
+ conventions, id,
+ includes?, counterIncludes?, includeAllCounters?, metadataOnly?
+\});
+
+new GetDocumentsCommand(\{
+ conventions, ids,
+ includes?, timeSeriesIncludes?, compareExchangeValueIncludes?,
+ revisionsIncludesByChangeVector?, revisionIncludeByDateTimeBefore?,
+ counterIncludes?, includeAllCounters?, metadataOnly?
+\});
+
+new GetDocumentsCommand(\{
+ conventions, start, pageSize,
+ startsWith?, startsAfter?, matches?, exclude?,
+ counterIncludes?, includeAllCounters?, metadataOnly?
+\});
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------------------------|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **conventions** | `DocumentConventions` | The store's conventions. |
+| **id** | `string` | ID of the document to get. |
+| **ids** | `string[]` | IDs of the documents to get. |
+| **includes** | `string[]` | Related documents to fetch along with the document. |
+| **counterIncludes** | `string[]` | Counters to fetch along with the document. |
+| **includeAllCounters** | `boolean` | Whether to include all counters. |
+| **timeSeriesIncludes** | `AbstractTimeSeriesRange[]` | Time series to fetch along with the document. |
+| **compareExchangeValueIncludes** | `string[]` | List of document fields containing cmpXchg keys of the compare-exchange items you wish to include. |
+| **revisionsIncludesByChangeVector** | `string[]` | List of document fields containing the change-vectors of the revisions you wish to include. |
+| **revisionIncludeByDateTimeBefore** | `Date` | When this date is provided, retrieve the most recent revision that was created before this date value. |
+| **metadataOnly** | `boolean` | Whether to fetch the whole document or just the metadata. |
+| **start** | `number` | Number of documents that should be skipped. |
+| **pageSize** | `number` | Maximum number of documents that will be retrieved. |
+| **startsWith** | `string` | Fetch only documents with this prefix. |
+| **startsAfter**                     | `string`                    | Skip 'document fetching' until the given ID is found, and return documents after that ID (default: null).                                               |
+| **matches** | `string` | Pipe ('|') separated values for which document IDs (after `startsWith`) should be matched. (`?` any single character, `*` any characters). |
+| **exclude**                         | `string`                     | Pipe ('|') separated values for which document IDs (after `startsWith`) should **not** be matched. (`?` any single character, `*` any characters).       |
+
+
+
+{`// The \`GetDocumentsCommand\` result object:
+// =======================================
+
+\{
+ includes; // object
+ results; // any[]
+ counterIncludes; // object
+ revisionIncludes; // any[]
+ timeSeriesIncludes; // object
+ compareExchangeValueIncludes; // object
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_get-php.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_get-php.mdx
new file mode 100644
index 0000000000..0b84b3040e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_get-php.mdx
@@ -0,0 +1,255 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetDocumentsCommand` to retrieve documents from the database.
+
+* In this page:
+ - [Get single document](../../../client-api/commands/documents/get.mdx#get-single-document)
+ - [Get multiple documents](../../../client-api/commands/documents/get.mdx#get-multiple-documents)
+ - [Get paged documents](../../../client-api/commands/documents/get.mdx#get-paged-documents)
+ - [Get documents by ID prefix](../../../client-api/commands/documents/get.mdx#get-documents-by-id-prefix)
+
+
+## Get single document
+
+**GetDocumentsCommand** can be used to retrieve a single document.
+
+#### Syntax:
+
+
+
+{`public static function forSingleDocument(string $id, StringArray|array|null $includes = null, bool $metadataOnly = false): GetDocumentsCommand;
+`}
+
+
+
+| Parameters | Type | Description |
+|------------|------|-------------|
+| **id** | `string` | ID of the document to get |
+| **includes** | `StringArray` or `array` or `null` | Related documents to fetch along with the document |
+| **metadataOnly** | `bool` | Whether to fetch the whole document or just its metadata. |
+#### Example:
+
+
+
+{`$command = GetDocumentsCommand::forSingleDocument("orders/1-A", null, false);
+$session->advanced()->getRequestExecutor()->execute($command);
+
+/** @var GetDocumentsResult $documentsResult */
+$documentsResult = $command->getResult();
+$order = $documentsResult->getResults()[0];
+`}
+
+
+
+
+
+## Get multiple documents
+
+**GetDocumentsCommand** can also be used to retrieve a list of documents.
+
+#### Syntax:
+
+
+
+{`public static function forMultipleDocuments(StringArray|array|null $ids, StringArray|array|null $includes, bool $metadataOnly = false): GetDocumentsCommand;
+`}
+
+
+
+| Parameters | Type | Description |
+|------------|------|-------------|
+| **ids** | `StringArray` or `array` or `null` | IDs of the documents to get |
+| **includes** | `StringArray` or `array` or `null` | Related documents to fetch along with the documents |
+| **metadataOnly** | `bool` | Whether to fetch whole documents or just the metadata |
+#### Example I
+
+
+
+{`$command = GetDocumentsCommand::forMultipleDocuments(["orders/1-A", "employees/3-A"], null, false);
+$session->advanced()->getRequestExecutor()->execute($command);
+
+/** @var GetDocumentsResult $result */
+$result = $command->getResult();
+$order = $result->getResults()[0];
+$employee = $result->getResults()[1];
+`}
+
+
+
+#### Example II - Using Includes
+
+
+
+{`// Fetch employees/5-A and his boss.
+$command = GetDocumentsCommand::forSingleDocument("employees/5-A", [ "ReportsTo" ], false);
+$session->advanced()->getRequestExecutor()->execute($command);
+/** @var GetDocumentsResult $result */
+$result = $command->getResult();
+$employee = $result->getResults()[0];
+if (array_key_exists("ReportsTo", $employee)) \{
+ $bossId = $employee["ReportsTo"];
+
+ $boss = $result->getIncludes()[$bossId];
+\}
+`}
+
+
+
+#### Example III - Missing Documents
+
+
+
+{`// Assuming that products/9999-A doesn't exist.
+$command = GetDocumentsCommand::forMultipleDocuments([ "products/1-A", "products/9999-A", "products/3-A" ], null, false);
+$session->advanced()->getRequestExecutor()->execute($command);
+
+/** @var GetDocumentsResult $result */
+$result = $command->getResult();
+$products = $result->getResults(); // products/1-A, null, products/3-A
+`}
+
+
+
+
+
+## Get paged documents
+
+**GetDocumentsCommand** can also be used to retrieve a paged set of documents.
+
+#### Syntax:
+
+
+
+{`public static function withStartAndPageSize(int $start, int $pageSize): GetDocumentsCommand;
+`}
+
+
+
+| Parameters | Type | Description |
+|------------|------|-------------|
+| **start** | `int` | number of documents that should be skipped |
+| **pageSize** | `int` | maximum number of documents that will be retrieved |
+#### Example:
+
+
+
+{`$command = GetDocumentsCommand::withStartAndPageSize(0, 128);
+$session->advanced()->getRequestExecutor()->execute($command);
+
+/** @var GetDocumentsResult $result */
+$result = $command->getResult();
+$firstDocs = $result->getResults();
+`}
+
+
+
+
+
+## Get documents by ID prefix
+
+**GetDocumentsCommand** can be used to retrieve multiple documents for a specified ID prefix.
+
+#### Syntax:
+
+
+
+{`public static function withStartWith(
+ ?string $startWith,
+ ?string $startAfter,
+ ?string $matches,
+ ?string $exclude,
+ int $start,
+ int $pageSize,
+ bool $metadataOnly
+): GetDocumentsCommand;
+`}
+
+
+
+| Parameters | Type | Description |
+|------------|------|-------------|
+| **startWith** | `?string` | prefix for which documents should be returned |
+| **startAfter** | `?string` | skip 'document fetching' until the given ID is found, and return documents after that ID (default: `null`) |
+| **matches** | `?string` | pipe ('|') separated values for which document IDs (after `startWith`) should be matched ('?' any single character, '*' any characters) |
+| **exclude** | `?string` | pipe ('|') separated values for which document IDs (after `startWith`) should **not** be matched ('?' any single character, '*' any characters) |
+| **start** | `int` | number of documents that should be skipped |
+| **pageSize** | `int` | maximum number of documents that will be retrieved |
+| **metadataOnly** | `bool` | specifies whether or not only document metadata should be returned |
+#### Example I
+
+
+
+{`// return up to 128 documents with a key that starts with 'products'
+$command = GetDocumentsCommand::withStartWith(
+ startWith: "products",
+ startAfter: null,
+ matches: null,
+ exclude: null,
+ start: 0,
+ pageSize: 128,
+ metadataOnly: false);
+$session->advanced()->getRequestExecutor()->execute($command);
+
+/** @var GetDocumentsResult $result */
+$result = $command->getResult();
+$products = $result->getResults();
+`}
+
+
+
+#### Example II
+
+
+
+{`// return up to 128 documents with a key that starts with 'products/'
+// where the rest of the key begins with "1" or "2", e.g. products/10, products/25
+$command = GetDocumentsCommand::withStartWith(
+    startWith: "products/",
+ startAfter: null,
+ matches: "1*|2*",
+ exclude: null,
+ start: 0,
+ pageSize: 128,
+ metadataOnly: false);
+$session->advanced()->getRequestExecutor()->execute($command);
+
+/** @var GetDocumentsResult $result */
+$result = $command->getResult();
+$products = $result->getResults();
+`}
+
+
+
+#### Example III
+
+
+
+{`// return up to 128 documents with a key that starts with 'products/'
+// where the rest of the key has a length of 3, begins and ends with "1",
+// and contains any character at the 2nd position, e.g. products/101, products/1B1
+$command = GetDocumentsCommand::withStartWith(
+    startWith: "products/",
+ startAfter: null,
+ matches: "1?1",
+ exclude: null,
+ start: 0,
+ pageSize: 128,
+ metadataOnly: false);
+
+$session->advanced()->getRequestExecutor()->execute($command);
+
+/** @var GetDocumentsResult $result */
+$result = $command->getResult();
+$products = $result->getResults();
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_get-python.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_get-python.mdx
new file mode 100644
index 0000000000..1a94160777
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_get-python.mdx
@@ -0,0 +1,225 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetDocumentsCommand` to retrieve documents from the database.
+
+* In this page:
+ - [Get single document](../../../client-api/commands/documents/get.mdx#get-single-document)
+ - [Get multiple documents](../../../client-api/commands/documents/get.mdx#get-multiple-documents)
+ - [Get paged documents](../../../client-api/commands/documents/get.mdx#get-paged-documents)
+ - [Get documents by ID prefix](../../../client-api/commands/documents/get.mdx#get-documents-by-id-prefix)
+
+
+## Get single document
+
+**GetDocumentsCommand** can be used to retrieve a single document.
+
+#### Syntax:
+
+
+
+{`# GetDocumentsCommand.from_single_id(...)
+@classmethod
+def from_single_id(
+ cls, key: str, includes: List[str] = None, metadata_only: bool = None
+) -> GetDocumentsCommand: ...
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-------------|-----------------------------------------------------------|
+| **key**           | `str`       | ID of the document to get.                                |
+| **includes** | `List[str]` | Related documents to fetch along with the document. |
+| **metadata_only** | `bool` | Whether to fetch the whole document or just the metadata. |
+#### Example:
+
+
+
+{`command = GetDocumentsCommand.from_single_id("orders/1-A", None, False)
+session.advanced.request_executor.execute_command(command)
+order = command.result.results[0]
+`}
+
+
+
+
+
+## Get multiple documents
+
+**GetDocumentsCommand** can also be used to retrieve a list of documents.
+
+#### Syntax:
+
+
+
+{`# GetDocumentsCommand.from_multiple_ids(...)
+@classmethod
+def from_multiple_ids(
+ cls,
+ keys: List[str],
+ includes: List[str] = None,
+ counter_includes: List[str] = None,
+ time_series_includes: List[str] = None,
+ compare_exchange_value_includes: List[str] = None,
+ metadata_only: bool = False,
+) -> GetDocumentsCommand: ...
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-------------|--------------------------------------------------------|
+| **keys** | `List[str]` | IDs of the documents to get. |
+| **includes**      | `List[str]` | Related documents to fetch along with the documents.   |
+| **counter_includes** | `List[str]` | Counters to fetch along with the documents.         |
+| **time_series_includes** | `List[str]` | Time series to fetch along with the documents.  |
+| **compare_exchange_value_includes** | `List[str]` | Compare-exchange items to fetch along with the documents. |
+| **metadata_only** | `bool`      | Whether to fetch whole documents or just the metadata. |
+#### Example I
+
+
+
+{`command = GetDocumentsCommand.from_multiple_ids(["orders/1-A", "employees/3-A"])
+session.advanced.request_executor.execute_command(command)
+order = command.result.results[0]
+employee = command.result.results[1]
+`}
+
+
+
+#### Example II - Using Includes
+
+
+
+{`# Fetch employees/5-A and his boss.
+command = GetDocumentsCommand.from_single_id("employees/5-A", ["ReportsTo"], False)
+session.advanced.request_executor.execute_command(command)
+employee = command.result.results[0]
+boss = command.result.includes.get(employee.get("ReportsTo", None), None)
+`}
+
+
+
+#### Example III - Missing Documents
+
+
+
+{`# Assuming that products/9999-A doesn't exist
+command = GetDocumentsCommand.from_multiple_ids(["products/1-A", "products/9999-A", "products/3-A"])
+session.advanced.request_executor.execute_command(command)
+products = command.result.results # products/1-A, products/3-A
+`}
+
+
+
+
+
+## Get paged documents
+
+**GetDocumentsCommand** can also be used to retrieve a paged set of documents.
+
+#### Syntax:
+
+
+
+{`# GetDocumentsCommand.from_paging(...)
+@classmethod
+def from_paging(cls, start: int, page_size: int) -> GetDocumentsCommand: ...
+`}
+
+
+
+| Parameter | Type | Description |
+|---------------|-------|-----------------------------------------------------|
+| **start** | `int` | Number of documents that should be skipped. |
+| **page_size** | `int` | Maximum number of documents that will be retrieved. |
+#### Example:
+
+
+
+{`command = GetDocumentsCommand.from_paging(0, 128)
+session.advanced.request_executor.execute_command(command)
+first_docs = command.result.results
+`}
+
+
+
+
+
+## Get documents by ID prefix
+
+**GetDocumentsCommand** can be used to retrieve multiple documents for a specified ID prefix.
+
+#### Syntax:
+
+
+
+{`# GetDocumentsCommand.from_starts_with(...)
+@classmethod
+def from_starts_with(
+ cls,
+ start_with: str,
+ start_after: str = None,
+ matches: str = None,
+ exclude: str = None,
+ start: int = None,
+ page_size: int = None,
+ metadata_only: bool = None,
+) -> GetDocumentsCommand: ...
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|--------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **start_with** | `str` | Prefix for which documents should be returned. |
+| **start_after** | `str` | Skip 'document fetching' until the given ID is found, and return documents after that ID (default: None). |
+| **matches** | `str` | Pipe ('|') separated values for which document IDs (after `start_with`) should be matched ('?' any single character, '*' any characters). |
+| **exclude** | `str` | Pipe ('|') separated values for which document IDs (after `start_with`) should **not** be matched ('?' any single character, '*' any characters). |
+| **start** | `int` | Number of documents that should be skipped. |
+| **page_size** | `int` | Maximum number of documents that will be retrieved. |
+| **metadata_only** | `bool` | Specifies whether or not only document metadata should be returned. |
+#### Example I
+
+
+
+{`# return up to 128 documents with a key that starts with 'products'
+command = GetDocumentsCommand.from_starts_with("products", start=0, page_size=128)
+session.advanced.request_executor.execute_command(command)
+products = command.result.results
+`}
+
+
+
+#### Example II
+
+
+
+{`# return up to 128 documents with a key that starts with 'products/'
+# where the rest of the key begins with "1" or "2", e.g. products/10, products/25
+command = GetDocumentsCommand.from_starts_with("products/", matches="1*|2*", start=0, page_size=128)
+session.advanced.request_executor.execute_command(command)
+products = command.result.results
+`}
+
+
+
+#### Example III
+
+
+
+{`# return up to 128 documents with a key that starts with 'products/'
+# where the rest of the key has a length of 3, begins and ends with "1",
+# and contains any character at the 2nd position, e.g. products/101, products/1B1
+command = GetDocumentsCommand.from_starts_with("products/", matches="1?1", start=0, page_size=128)
+session.advanced.request_executor.execute_command(command)
+products = command.result.results
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_put-csharp.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_put-csharp.mdx
new file mode 100644
index 0000000000..158a252a6a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_put-csharp.mdx
@@ -0,0 +1,202 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the low-level `PutDocumentCommand` to insert a new document into the database or update an existing document.
+
+* When using `PutDocumentCommand`, you must explicitly **specify the collection** to which the document will belong,
+ otherwise, the document will be placed in the `@empty` collection. See how this is done in the example below.
+
+* To insert a document into the database using a higher-level method, see [storing entities](../../../client-api/session/storing-entities.mdx).
+ To update an existing document using a higher-level method, see [update entities](../../../client-api/session/updating-entities.mdx).
+
+* In this page:
+
+ * [Examples](../../../client-api/commands/documents/put.mdx#examples)
+ * [Syntax](../../../client-api/commands/documents/put.mdx#syntax)
+
+
+## Examples
+
+
+
+**Put document command - using the Store's request executor**:
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+ // Define the document to 'put' as a blittable object
+ var blittableDocument = context.ReadObject(new DynamicJsonValue()
+ {
+ ["@metadata"] = new DynamicJsonValue()
+ {
+ ["@collection"] = "Categories"
+ },
+ ["Name"] = "My category",
+ ["Description"] = "My category description"
+ }, "categories/999");
+
+ // Define the PutDocumentCommand
+ var command = new PutDocumentCommand(store.Conventions,
+ "categories/999", null, blittableDocument);
+
+ // Call 'Execute' on the Store Request Executor to send the command to the server
+ store.GetRequestExecutor().Execute(command, context);
+
+ // Access the command result
+ var putResult = command.Result;
+ var theDocumentID = putResult.Id;
+ var theDocumentCV = putResult.ChangeVector;
+}
+`}
+
+
+
+
+{`using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+ // Define the document to 'put' as a blittable object
+ var blittableDocument = context.ReadObject(new DynamicJsonValue()
+ {
+ ["@metadata"] = new DynamicJsonValue()
+ {
+ ["@collection"] = "Categories"
+ },
+ ["Name"] = "My category",
+ ["Description"] = "My category description"
+ }, "categories/999");
+
+ // Define the PutDocumentCommand
+ var command = new PutDocumentCommand(store.Conventions,
+ "categories/999", null, blittableDocument);
+
+ // Call 'ExecuteAsync' on the Store Request Executor to send the command to the server
+ await store.GetRequestExecutor().ExecuteAsync(command, context);
+
+ // Access the command result
+ var putResult = command.Result;
+ var theDocumentID = putResult.Id;
+ var theDocumentCV = putResult.ChangeVector;
+}
+`}
+
+
+
+
+
+
+
+**Put document command - using the Session's request executor**:
+
+
+
+{`// Create a new document entity
+var doc = new Category
+{
+ Name = "My category",
+ Description = "My category description"
+};
+
+// Specify the collection to which the document will belong
+var docInfo = new DocumentInfo
+{
+ Collection = "Categories"
+};
+
+// Convert your entity to a BlittableJsonReaderObject
+var blittableDocument = session.Advanced.JsonConverter.ToBlittable(doc, docInfo);
+
+// Define the PutDocumentCommand
+var command = new PutDocumentCommand(store.Conventions,
+ "categories/999", null, blittableDocument);
+
+// Call 'Execute' on the Session Request Executor to send the command to the server
+session.Advanced.RequestExecutor.Execute(command, session.Advanced.Context);
+
+// Access the command result
+var putResult = command.Result;
+var theDocumentID = putResult.Id;
+var theDocumentCV = putResult.ChangeVector;
+`}
+
+
+
+
+{`// Create a new document entity
+var doc = new Category
+{
+ Name = "My category",
+ Description = "My category description"
+};
+
+// Specify the collection to which the document will belong
+var docInfo = new DocumentInfo
+{
+ Collection = "Categories"
+};
+
+// Convert your entity to a BlittableJsonReaderObject
+var blittableDocument = asyncSession.Advanced.JsonConverter.ToBlittable(doc, docInfo);
+
+// Define the PutDocumentCommand
+var command = new PutDocumentCommand(store.Conventions,
+ "categories/999", null, blittableDocument);
+
+// Call 'Execute' on the Session Request Executor to send the command to the server
+await asyncSession.Advanced.RequestExecutor.ExecuteAsync(
+ command, asyncSession.Advanced.Context);
+
+// Access the command result
+var putResult = command.Result;
+var theDocumentID = putResult.Id;
+var theDocumentCV = putResult.ChangeVector;
+`}
+
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public PutDocumentCommand(DocumentConventions conventions,
+ string id, string changeVector, BlittableJsonReaderObject document)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|-----------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **conventions**  | `DocumentConventions`       | The store's conventions.                                                                                                                                                                                                                               |
+| **id**           | `string`                    | Unique ID under which document will be stored.                                                                                                                                                                                                         |
+| **changeVector** | `string` | The change-vector of the document you wish to update, used for [optimistic concurrency control](../../../server/clustering/replication/change-vector.mdx#concurrency-control--change-vectors). Pass `null` to skip the check and force the 'put'. |
+| **document** | `BlittableJsonReaderObject` | The document to store. Use: `session.Advanced.JsonConverter.ToBlittable(doc, docInfo);` to convert your entity to a `BlittableJsonReaderObject`. |
+
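+
+As an illustration of the `changeVector` parameter, here is a minimal sketch of optimistic concurrency with `PutDocumentCommand` (the change-vector value below is hypothetical; in practice it is read from the document's `@metadata`):
+
+
+
+{`// Pass the change-vector that was read from the document instead of null -
+// the server will then reject the put if the document was modified in the meantime
+using (var store = new DocumentStore())
+using (store.GetRequestExecutor().ContextPool.AllocateOperationContext(out var context))
+{
+    // A hypothetical change-vector, taken from the document's @metadata when it was read
+    var loadedChangeVector = "A:1-hypotheticalChangeVector";
+
+    var updatedDocument = context.ReadObject(new DynamicJsonValue()
+    {
+        ["@metadata"] = new DynamicJsonValue()
+        {
+            ["@collection"] = "Categories"
+        },
+        ["Name"] = "My updated category"
+    }, "categories/999");
+
+    var command = new PutDocumentCommand(store.Conventions,
+        "categories/999", loadedChangeVector, updatedDocument);
+
+    // Throws a ConcurrencyException if the stored document's
+    // change-vector no longer matches the one passed in
+    store.GetRequestExecutor().Execute(command, context);
+}
+`}
+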
+
+
+{`// The \`PutDocumentCommand\` result:
+// ================================
+
+public class PutResult
+\{
+    // The ID under which the document was stored
+ public string Id \{ get; set; \}
+
+ // The changeVector that was assigned to the stored document
+ public string ChangeVector \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_put-java.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_put-java.mdx
new file mode 100644
index 0000000000..3ad69808aa
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_put-java.mdx
@@ -0,0 +1,41 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**Put** is used to insert or update a document in a database.
+
+## Syntax
+
+
+
+{`public PutDocumentCommand(String id, String changeVector, ObjectNode document)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|--------------|----------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `String` | Unique ID under which document will be stored |
+| **changeVector** | `String` | Entity changeVector, used for concurrency checks (`null` to skip check) |
+| **document**     | `ObjectNode` | The document to store. You may use `session.advanced().getEntityToJson().convertEntityToJson` to convert your entity to an `ObjectNode`  |
+
+## Example
+
+
+
+{`Category doc = new Category();
+doc.setName("My category");
+doc.setDescription("My category description");
+
+DocumentInfo docInfo = new DocumentInfo();
+docInfo.setCollection("Categories");
+
+ObjectNode jsonDoc = session.advanced().getEntityToJson().convertEntityToJson(doc, docInfo);
+PutDocumentCommand command = new PutDocumentCommand("categories/999", null, jsonDoc);
+session.advanced().getRequestExecutor().execute(command);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_put-nodejs.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_put-nodejs.mdx
new file mode 100644
index 0000000000..cbffcb8a25
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_put-nodejs.mdx
@@ -0,0 +1,132 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the low-level `PutDocumentCommand` to insert a new document into the database or update an existing document.
+
+* When using `PutDocumentCommand`, you must explicitly **specify the collection** to which the document will belong,
+ otherwise, the document will be placed in the `@empty` collection. See how this is done in the example below.
+
+* To insert a document into the database using a higher-level method, see [storing entities](../../../client-api/session/storing-entities.mdx).
+ To update an existing document using a higher-level method, see [update entities](../../../client-api/session/updating-entities.mdx).
+
+* In this page:
+
+ * [Examples](../../../client-api/commands/documents/put.mdx#examples)
+ * [Syntax](../../../client-api/commands/documents/put.mdx#syntax)
+
+
+## Examples
+
+
+
+**Put document command - using the Store's request executor**:
+
+
+{`// Define the json document to 'put'
+const jsonDocument = \{
+ name: "My category",
+ description: "My category description",
+ "@metadata": \{
+ "@collection": "categories"
+ \}
+\}
+
+// Define the 'PutDocumentCommand'
+// Pass the document ID, the change-vector (null to skip the concurrency check),
+// and the json document to store
+const command = new PutDocumentCommand("categories/999", null, jsonDocument);
+
+// Call 'execute' on the Store Request Executor to send the command to the server
+await documentStore.getRequestExecutor().execute(command);
+
+// Access the command result
+const result = command.result;
+const theDocumentID = result.id;
+const theDocumentCV = result.changeVector;
+
+assert.strictEqual(theDocumentID, "categories/999");
+`}
+
+
+
+
+
+
+**Put document command - using the Session's request executor**:
+
+
+{`const session = documentStore.openSession();
+
+// Create a new entity
+const category = new Category();
+category.name = "My category";
+category.description = "My category description";
+
+// To be able to specify under which collection the document should be stored
+// you need to convert the entity to a json document first.
+
+// Passing the entity as is instead of the json document
+// will result in storing the document under the "@empty" collection.
+
+const documentInfo = new DocumentInfo();
+documentInfo.collection = "categories"; // The target collection
+const jsonDocument = EntityToJson.convertEntityToJson(
+ category, documentStore.conventions, documentInfo);
+
+// Define the 'PutDocumentCommand'
+// Pass the document ID, the change-vector (null to skip the concurrency check),
+// and the json document to store
+const command = new PutDocumentCommand("categories/999", null, jsonDocument);
+
+// Call 'execute' on the Session Request Executor to send the command to the server
+await session.advanced.requestExecutor.execute(command);
+
+// Access the command result
+const result = command.result;
+const theDocumentID = result.id;
+const theDocumentCV = result.changeVector;
+
+assert.strictEqual(theDocumentID, "categories/999");
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`PutDocumentCommand(id, changeVector, document);
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | Unique ID under which document will be stored. |
+| **changeVector** | `string` | The change-vector of the document you wish to update, used for [optimistic concurrency control](../../../server/clustering/replication/change-vector.mdx#concurrency-control--change-vectors). Pass `null` to skip the check and force the 'put'. |
+| **document** | `object` | The document to store. |
+
+
+
+{`// Executing \`PutDocumentCommand\` returns the following object:
+\{
+ // The document id under which the entity was stored
+ id; // string
+
+ // The change vector assigned to the stored document
+ changeVector; // string
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_put-php.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_put-php.mdx
new file mode 100644
index 0000000000..eda40e730d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_put-php.mdx
@@ -0,0 +1,59 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `PutDocumentCommand` to insert a document to the database or update an existing document.
+
+* In this page:
+
+ * [Example](../../../client-api/commands/documents/put.mdx#example)
+ * [Syntax](../../../client-api/commands/documents/put.mdx#syntax)
+
+
+## Example
+
+
+
+{`// Create a new document
+$doc = new Category();
+$doc->setName("My category");
+$doc->setDescription("My category description");
+
+// Create metadata on the document
+$docInfo = new DocumentInfo();
+$docInfo->setCollection("Categories");
+
+// Convert your entity to a BlittableJsonReaderObject
+$jsonDoc = $session->advanced()->getEntityToJson()->convertEntityToJson($doc, $docInfo);
+
+// The Put command (parameters are document ID, changeVector check is null, the document to store)
+$command = new PutDocumentCommand("categories/999", null, $jsonDoc);
+// RequestExecutor sends the command to the server
+$session->advanced()->getRequestExecutor()->execute($command);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`PutDocumentCommand(string $idOrCopy, ?string $changeVector, array $document);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **idOrCopy** | `string` | Unique ID under which document will be stored |
+| **changeVector** | `string` (optional) | Entity changeVector, used for concurrency checks (`null` to skip check) |
+| **document** | `array` | The document to store |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/_put-python.mdx b/versioned_docs/version-7.1/client-api/commands/documents/_put-python.mdx
new file mode 100644
index 0000000000..241e6e9fe9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/_put-python.mdx
@@ -0,0 +1,57 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `PutDocumentCommand` to insert a document to the database or update an existing document.
+
+* In this page:
+
+ * [Example](../../../client-api/commands/documents/put.mdx#example)
+ * [Syntax](../../../client-api/commands/documents/put.mdx#syntax)
+
+
+## Example
+
+
+
+{`# Create a new document
+doc = Category(name="My category", description="My category description")
+
+# Create metadata on the document
+doc_info = DocumentInfo(collection="Categories")
+
+# Convert your entity to a dict
+dict_doc = session.entity_to_json.convert_entity_to_json_static(doc, session.conventions, doc_info)
+
+# The put command (parameters are document ID, change vector check is None, the document to store)
+command = PutDocumentCommand("employees/1-A", None, dict_doc)
+# Request executor sends the command to the server
+session.advanced.request_executor.execute_command(command)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class PutDocumentCommand(RavenCommand[PutResult]):
+ def __init__(self, key: str, change_vector: Optional[str], document: dict): ...
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|------------------|-------------------------------------------------------------------------|
+| **key** | `str` | Unique ID under which document will be stored |
+| **change_vector** | `str` (optional) | Entity changeVector, used for concurrency checks (`None` to skip check) |
+| **document** | `dict` | The document to store |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/delete.mdx b/versioned_docs/version-7.1/client-api/commands/documents/delete.mdx
new file mode 100644
index 0000000000..1d868f5335
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/delete.mdx
@@ -0,0 +1,49 @@
+---
+title: "Delete Document Command"
+hide_table_of_contents: true
+sidebar_label: Delete Document
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeleteCsharp from './_delete-csharp.mdx';
+import DeleteJava from './_delete-java.mdx';
+import DeletePython from './_delete-python.mdx';
+import DeletePhp from './_delete-php.mdx';
+import DeleteNodejs from './_delete-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/get.mdx b/versioned_docs/version-7.1/client-api/commands/documents/get.mdx
new file mode 100644
index 0000000000..f1f38452a6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/get.mdx
@@ -0,0 +1,49 @@
+---
+title: "Get Documents Command"
+hide_table_of_contents: true
+sidebar_label: Get Documents
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetCsharp from './_get-csharp.mdx';
+import GetJava from './_get-java.mdx';
+import GetPython from './_get-python.mdx';
+import GetPhp from './_get-php.mdx';
+import GetNodejs from './_get-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/commands/documents/put.mdx b/versioned_docs/version-7.1/client-api/commands/documents/put.mdx
new file mode 100644
index 0000000000..d11cce533f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/documents/put.mdx
@@ -0,0 +1,49 @@
+---
+title: "Put Document Command"
+hide_table_of_contents: true
+sidebar_label: Put Document
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PutCsharp from './_put-csharp.mdx';
+import PutJava from './_put-java.mdx';
+import PutPython from './_put-python.mdx';
+import PutPhp from './_put-php.mdx';
+import PutNodejs from './_put-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/commands/overview.mdx b/versioned_docs/version-7.1/client-api/commands/overview.mdx
new file mode 100644
index 0000000000..edf2a9175d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/commands/overview.mdx
@@ -0,0 +1,40 @@
+---
+title: "Commands Overview"
+hide_table_of_contents: true
+sidebar_label: Commands Overview
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import OverviewCsharp from './_overview-csharp.mdx';
+import OverviewJava from './_overview-java.mdx';
+import OverviewNodejs from './_overview-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/_category_.json b/versioned_docs/version-7.1/client-api/configuration/_category_.json
new file mode 100644
index 0000000000..a20298e082
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/_category_.json
@@ -0,0 +1,4 @@
+{
+    "position": 13,
+    "label": "Configuration"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/_conventions-csharp.mdx b/versioned_docs/version-7.1/client-api/configuration/_conventions-csharp.mdx
new file mode 100644
index 0000000000..5889ddc81c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/_conventions-csharp.mdx
@@ -0,0 +1,1259 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* **Conventions** in RavenDB are customizable settings that let you tailor the client's behavior to your preferences.
+
+* In this article:
+ * [How to set conventions](../../client-api/configuration/conventions.mdx#how-to-set-conventions)
+ * [Conventions:](../../client-api/configuration/conventions.mdx#conventions:)
+ [AddIdFieldToDynamicObjects](../../client-api/configuration/conventions.mdx#addidfieldtodynamicobjects)
+ [AggressiveCache.Duration](../../client-api/configuration/conventions.mdx#aggressivecacheduration)
+ [AggressiveCache.Mode](../../client-api/configuration/conventions.mdx#aggressivecachemode)
+ [AsyncDocumentIdGenerator](../../client-api/configuration/conventions.mdx#asyncdocumentidgenerator)
+ [CreateHttpClient](../../client-api/configuration/conventions.mdx#createhttpclient)
+ [DisableAtomicDocumentWritesInClusterWideTransaction](../../client-api/configuration/conventions.mdx#disableatomicdocumentwritesinclusterwidetransaction)
+ [DisableTcpCompression](../../client-api/configuration/conventions.mdx#disabletcpcompression)
+ [DisableTopologyCache](../../client-api/configuration/conventions.mdx#disabletopologycache)
+ [DisableTopologyUpdates](../../client-api/configuration/conventions.mdx#disabletopologyupdates)
+ [DisposeCertificate](../../client-api/configuration/conventions.mdx#disposecertificate)
+ [FindClrType](../../client-api/configuration/conventions.mdx#findclrtype)
+ [FindClrTypeName](../../client-api/configuration/conventions.mdx#findclrtypename)
+ [FindClrTypeNameForDynamic](../../client-api/configuration/conventions.mdx#findclrtypenamefordynamic)
+ [FindCollectionName](../../client-api/configuration/conventions.mdx#findcollectionname)
+ [FindCollectionNameForDynamic](../../client-api/configuration/conventions.mdx#findcollectionnamefordynamic)
+ [FindIdentityProperty](../../client-api/configuration/conventions.mdx#findidentityproperty)
+ [FindIdentityPropertyNameFromCollectionName](../../client-api/configuration/conventions.mdx#findidentitypropertynamefromcollectionname)
+ [FindProjectedPropertyNameForIndex](../../client-api/configuration/conventions.mdx#findprojectedpropertynameforindex)
+ [FindPropertyNameForDynamicIndex](../../client-api/configuration/conventions.mdx#findpropertynamefordynamicindex)
+ [FindPropertyNameForIndex](../../client-api/configuration/conventions.mdx#findpropertynameforindex)
+ [FirstBroadcastAttemptTimeout](../../client-api/configuration/conventions.mdx#firstbroadcastattempttimeout)
+ [HttpClientType](../../client-api/configuration/conventions.mdx#httpclienttype)
+ [HttpVersion](../../client-api/configuration/conventions.mdx#httpversion)
+ [IdentityPartsSeparator](../../client-api/configuration/conventions.mdx#identitypartsseparator)
+ [LoadBalanceBehavior](../../client-api/configuration/conventions.mdx#loadbalancebehavior)
+ [LoadBalancerContextSeed](../../client-api/configuration/conventions.mdx#loadbalancebehavior)
+ [LoadBalancerPerSessionContextSelector](../../client-api/configuration/conventions.mdx#loadbalancebehavior)
+ [MaxHttpCacheSize](../../client-api/configuration/conventions.mdx#maxhttpcachesize)
+ [MaxNumberOfRequestsPerSession](../../client-api/configuration/conventions.mdx#maxnumberofrequestspersession)
+ [Modify serialization of property name](../../client-api/configuration/conventions.mdx#modify-serialization-of-property-name)
+ [OperationStatusFetchMode](../../client-api/configuration/conventions.mdx#operationstatusfetchmode)
+ [PreserveDocumentPropertiesNotFoundOnModel](../../client-api/configuration/conventions.mdx#preservedocumentpropertiesnotfoundonmodel)
+ [ReadBalanceBehavior](../../client-api/configuration/conventions.mdx#readbalancebehavior)
+ [RequestTimeout](../../client-api/configuration/conventions.mdx#requesttimeout)
+ [ResolveTypeFromClrTypeName](../../client-api/configuration/conventions.mdx#resolvetypefromclrtypename)
+ [SaveEnumsAsIntegers](../../client-api/configuration/conventions.mdx#saveenumsasintegers)
+ [SecondBroadcastAttemptTimeout](../../client-api/configuration/conventions.mdx#secondbroadcastattempttimeout)
+ [SendApplicationIdentifier](../../client-api/configuration/conventions.mdx#sendapplicationidentifier)
+ [ShouldIgnoreEntityChanges](../../client-api/configuration/conventions.mdx#shouldignoreentitychanges)
+ [TopologyCacheLocation](../../client-api/configuration/conventions.mdx#topologycachelocation)
+ [TransformTypeCollectionNameToDocumentIdPrefix](../../client-api/configuration/conventions.mdx#transformtypecollectionnametodocumentidprefix)
+ [UseHttpCompression](../../client-api/configuration/conventions.mdx#usehttpcompression)
+ [UseHttpDecompression](../../client-api/configuration/conventions.mdx#usehttpdecompression)
+ [HttpCompressionAlgorithm](../../client-api/configuration/conventions.mdx#httpcompressionalgorithm)
+ [UseOptimisticConcurrency](../../client-api/configuration/conventions.mdx#useoptimisticconcurrency)
+ [WaitForIndexesAfterSaveChangesTimeout](../../client-api/configuration/conventions.mdx#waitforindexesaftersavechangestimeout)
+ [WaitForNonStaleResultsTimeout](../../client-api/configuration/conventions.mdx#waitfornonstaleresultstimeout)
+ [WaitForReplicationAfterSaveChangesTimeout](../../client-api/configuration/conventions.mdx#waitforreplicationaftersavechangestimeout)
+
+
+## How to set conventions
+
+* Access the conventions via the `Conventions` property of the `DocumentStore` object.
+
+* The conventions set on a Document Store will apply to ALL [sessions](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx) and [operations](../../client-api/operations/what-are-operations.mdx) associated with that store.
+
+* Conventions can only be customized **before** calling `DocumentStore.Initialize()`.
+  Trying to set them after calling _Initialize()_ will throw an exception.
+
+
+
+{`using (var store = new DocumentStore()
+\{
+ Conventions =
+ \{
+ // Set conventions HERE, e.g.:
+ MaxNumberOfRequestsPerSession = 50,
+ AddIdFieldToDynamicObjects = false
+ // ...
+ \}
+\}.Initialize())
+\{
+ // * Here you can interact with the RavenDB store:
+ // open sessions, create or query for documents, perform operations, etc.
+
+ // * Conventions CANNOT be set here after calling Initialize()
+\}
+`}
+
+
+
+
+
+## Conventions:
+
+
+
+#### AddIdFieldToDynamicObjects
+* Use the `AddIdFieldToDynamicObjects` convention to determine whether an `Id` field is automatically added
+ to [dynamic objects](https://learn.microsoft.com/en-us/dotnet/csharp/advanced-topics/interop/using-type-dynamic) when [storing new entities](../../client-api/session/storing-entities.mdx) via the session.
+
+* DEFAULT: `true`
+
+
+
+{`// Syntax:
+public bool AddIdFieldToDynamicObjects \{ get; set; \}
+`}
+
+
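+
+For illustration, a minimal sketch (the task object and document ID are hypothetical) of storing a dynamic object with this convention disabled:
+
+
+
+{`using (var store = new DocumentStore()
+\{
+    Conventions =
+    \{
+        // Do not add an Id field to dynamic objects
+        AddIdFieldToDynamicObjects = false
+    \}
+\}.Initialize())
+using (var session = store.OpenSession())
+\{
+    dynamic task = new System.Dynamic.ExpandoObject();
+    task.Subject = "Order new chairs";
+
+    // Since no Id field is added to the dynamic object,
+    // pass the document ID explicitly
+    session.Store(task, "tasks/1-A");
+    session.SaveChanges();
+\}
+`}
+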
+
+
+
+
+#### AggressiveCache.Duration
+* Use the `AggressiveCache.Duration` convention to define the [aggressive cache](../../client-api/how-to/setup-aggressive-caching.mdx) duration period.
+
+* DEFAULT: `1 day`
+
+
+
+{`// Syntax:
+public TimeSpan Duration \{ get; set; \}
+`}
+
+
+
+
+
+
+#### AggressiveCache.Mode
+* Use the `AggressiveCache.Mode` convention to define the [aggressive cache](../../client-api/how-to/setup-aggressive-caching.mdx) mode.
+ (`AggressiveCacheMode.TrackChanges` or `AggressiveCacheMode.DoNotTrackChanges`)
+
+* DEFAULT: `AggressiveCacheMode.TrackChanges`
+
+
+
+{`// Syntax:
+public AggressiveCacheMode Mode \{ get; set; \}
+`}
+
+
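+
+For illustration, a minimal sketch that configures both aggressive-cache conventions (duration and mode) together:
+
+
+
+{`using (var store = new DocumentStore()
+\{
+    Conventions =
+    \{
+        AggressiveCache =
+        \{
+            // Cache server responses for 5 minutes
+            Duration = TimeSpan.FromMinutes(5),
+            // Do not track server-side changes while the cache is valid
+            Mode = AggressiveCacheMode.DoNotTrackChanges
+        \}
+    \}
+\}.Initialize())
+\{
+    // Sessions opened from this store will use these settings
+    // whenever aggressive caching is enabled
+\}
+`}
+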
+
+
+
+
+#### AsyncDocumentIdGenerator
+* Use the `AsyncDocumentIdGenerator` convention to define the document ID generator method
+ used when storing a document without explicitly specifying its `Id`.
+
+* You can override this global ID generator for specific object types using the [RegisterAsyncIdConvention](../../client-api/configuration/identifier-generation/type-specific.mdx) convention.
+
+* DEFAULT:
+ The default document ID generator is the `GenerateDocumentIdAsync` method, which is part of the `HiLoIdGenerator` object within the _DocumentStore_.
+ This method implements the [HiLo algorithm](../../client-api/document-identifiers/hilo-algorithm.mdx) to ensure efficient ID generation when storing a document without explicitly specifying its `Id`.
+
+
+
+{`// Customize ID generation for all collections
+AsyncDocumentIdGenerator = (database, obj) =>
+\{
+ var objectType = obj.GetType().Name; // e.g., Person, Order, etc.
+ var timestamp = DateTime.UtcNow.Ticks; // Get the current timestamp
+
+ // Format the ID as \{ObjectType\}/\{Ticks\}
+ var id = $"\{objectType\}/\{timestamp\}";
+
+ return Task.FromResult(id);
+\}
+`}
+
+
+
+
+{`// Syntax:
+public Func<string, object, Task<string>> AsyncDocumentIdGenerator \{ get; set; \}
+`}
+
+
+
+
+
+
+#### CreateHttpClient
+* Use the `CreateHttpClient` convention to modify the HTTP client your client application uses.
+
+* For example, implementing your own HTTP client can be useful when you'd like your clients to provide the server with tracing info.
+
+* If you override the default `CreateHttpClient` convention we advise that you also set the HTTP client type
+ correctly using the [HttpClientType](../../client-api/configuration/conventions.mdx#httpclienttype) convention.
+
+
+
+{`CreateHttpClient = handler =>
+\{
+ // Your HTTP client code here, e.g.:
+ var httpClient = new MyHttpClient(new HttpClientXRayTracingHandler(new HttpClientHandler()));
+ return httpClient;
+\}
+`}
+
+
+
+
+{`// Syntax:
+public Func<HttpClientHandler, HttpClient> CreateHttpClient \{ get; set; \}
+`}
+
+
+
+
+
+
+#### DisableAtomicDocumentWritesInClusterWideTransaction
+* EXPERT ONLY:
+ Use the `DisableAtomicDocumentWritesInClusterWideTransaction` convention to disable automatic
+ atomic writes with cluster write transactions.
+
+* When set to `true`, will only consider explicitly-added compare exchange values to validate cluster-wide transactions.
+
+* DEFAULT: `false`
+
+
+
+{`// Syntax:
+public bool? DisableAtomicDocumentWritesInClusterWideTransaction \{ get; set; \}
+`}
+
+
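+
+For illustration, a minimal sketch (expert usage; the document and ID are hypothetical) of a cluster-wide session with this convention enabled:
+
+
+
+{`using (var store = new DocumentStore()
+\{
+    Conventions =
+    \{
+        DisableAtomicDocumentWritesInClusterWideTransaction = true
+    \}
+\}.Initialize())
+using (var session = store.OpenSession(new SessionOptions
+\{
+    TransactionMode = TransactionMode.ClusterWide
+\}))
+\{
+    // No atomic-guard compare-exchange item is created for this document;
+    // only compare-exchange values that were added explicitly
+    // take part in validating the transaction
+    session.Store(new Category \{ Name = "Beverages" \}, "categories/1-A");
+    session.SaveChanges();
+\}
+`}
+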
+
+
+
+
+#### DisableTcpCompression
+* When setting the `DisableTcpCompression` convention to `true`, TCP data will not be compressed.
+
+* DEFAULT: `false`
+
+
+
+{`// Syntax:
+public bool DisableTcpCompression \{ get; set; \}
+`}
+
+
+
+
+
+
+#### DisableTopologyCache
+* By default, the client caches the cluster's topology in `*.raven-cluster-topology` files on disk.
+ When all servers provided in the `DocumentStore.Urls` property are down or unavailable,
+ the client will load the topology from the latest file and try to connect to nodes that are not listed in the URL property.
+
+* This behavior can be disabled when setting the `DisableTopologyCache` convention to `true`.
+ In such a case:
+
+ * The client will not load the topology from the cache upon failing to connect to a server.
+ * Even if the client is configured to [receive topology updates](../../client-api/configuration/conventions.mdx#disabletopologyupdates) from the server,
+ no topology files will be saved on disk, thus preventing the accumulation of these files.
+
+* DEFAULT: `false`
+
+
+
+{`// Syntax:
+public bool DisableTopologyCache \{ get; set; \}
+`}
+
+
+
+
+
+
+#### DisableTopologyUpdates
+* When setting the `DisableTopologyUpdates` convention to `true`,
+ no database topology updates will be sent from the server to the client (e.g. adding or removing a node).
+
+* DEFAULT: `false`
+
+
+
+{`// Syntax:
+public bool DisableTopologyUpdates \{ get; set; \}
+`}
+
+
+
+
+
+
+#### DisposeCertificate
+* When setting the `DisposeCertificate` convention to `true`,
+ the `DocumentStore.Certificate` will be disposed of during DocumentStore disposal.
+
+* DEFAULT: `true`
+
+
+
+{`// Syntax:
+public bool DisposeCertificate \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindClrType
+* Use the `FindClrType` convention to define a function that finds the CLR type of a document.
+
+* DEFAULT:
+ The CLR type is retrieved from the `Raven-Clr-Type` property under the `@metadata` key in the document.
+
+
+
+{`// The default implementation is:
+FindClrType = (_, doc) =>
+\{
+ if (doc.TryGet(Constants.Documents.Metadata.Key, out BlittableJsonReaderObject metadata) &&
+ metadata.TryGet(Constants.Documents.Metadata.RavenClrType, out string clrType))
+ return clrType;
+
+ return null;
+\}
+`}
+
+
+
+
+{`// Syntax:
+public Func<string, BlittableJsonReaderObject, string> FindClrType \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindClrTypeName
+* Use the `FindClrTypeName` convention to define a function that returns the CLR type name from a given type.
+
+* DEFAULT: Return the entity's full name, including the assembly name.
+
+
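+As a minimal sketch (a customization, not the built-in implementation), the stored type name could be reduced to the full type name without the assembly:
+
+
+
+{`FindClrTypeName = type => type.FullName
+`}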
+
+{`// Syntax:
+public Func<Type, string> FindClrTypeName \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindClrTypeNameForDynamic
+* Use the `FindClrTypeNameForDynamic` convention to define a function that returns the CLR type name
+ from a dynamic entity.
+
+* DEFAULT: The dynamic entity type is returned.
+
+
+
+{`// The dynamic entity's type is returned by default
+FindClrTypeNameForDynamic = dynamicEntity => dynamicEntity.GetType()
+`}
+
+
+
+
+{`// Syntax:
+public Func<dynamic, string> FindClrTypeNameForDynamic \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindCollectionName
+* Use the `FindCollectionName` convention to define a function that will customize
+ the collection name from a given type.
+
+* DEFAULT: The collection name will be the plural form of the type name.
+
+
+
+{`// Here the collection name will be the type name separated by dashes
+FindCollectionName = type => String.Join("-", type.Name.ToCharArray())
+`}
+
+
+
+
+{`// Syntax:
+public Func<Type, string> FindCollectionName \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindCollectionNameForDynamic
+* Use the `FindCollectionNameForDynamic` convention to define a function that will customize the
+ collection name from a dynamic type.
+
+* DEFAULT: The collection name will be the entity's type.
+
+
+
+{`// Here the collection name will be some property of the dynamic entity
+FindCollectionNameForDynamic = dynamicEntity => dynamicEntity.SomeProperty
+`}
+
+
+
+
+{`// Syntax:
+public Func<dynamic, string> FindCollectionNameForDynamic \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindIdentityProperty
+* Use the `FindIdentityProperty` convention to define a function that finds the specified ID property
+ in the entity.
+
+* DEFAULT: The entity's `Id` property serves as the ID property.
+
+
+
+{`// If there exists a property with name "CustomizedId" then it will be the entity's ID property
+FindIdentityProperty = memberInfo => memberInfo.Name == "CustomizedId"
+`}
+
+
+
+
+{`// Syntax:
+public Func<MemberInfo, bool> FindIdentityProperty \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindIdentityPropertyNameFromCollectionName
+* Use the `FindIdentityPropertyNameFromCollectionName` convention to define a function that customizes
+ the entity's ID property from the collection name.
+
+* DEFAULT: Will use the `Id` property.
+
+
+
+{`// Will use property "CustomizedId" as the ID property
+FindIdentityPropertyNameFromCollectionName = collectionName => "CustomizedId"
+`}
+
+
+
+
+{`// Syntax:
+public Func<string, string> FindIdentityPropertyNameFromCollectionName \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindProjectedPropertyNameForIndex
+* Use the `FindProjectedPropertyNameForIndex` convention to define a function that customizes the
+ **projected** field names that will be used in the RQL generated by the client and sent to the server when querying a static index.
+
+* This can be useful when projecting **nested properties** that are not [Stored in the index](../../indexes/storing-data-in-index.mdx).
+
+* The function receives the following input:
+ the index type, the index name, the current path, and the property path that is used in the query.
+
+* DEFAULT: `null`
+ When `FindProjectedPropertyNameForIndex` is set to `null` (or returns `null`),
+ the [FindPropertyNameForIndex](../../client-api/configuration/conventions.mdx#findpropertynameforindex) convention is used instead.
+**Example**:
+Consider the following index, which indexes the nested `School.Id` property from _Student_ documents:
+
+
+
+
+{`public class Students_BySchoolId : AbstractIndexCreationTask<Student, Students_BySchoolId.IndexEntry>
+{
+ public class IndexEntry
+ {
+ public string Name { get; set; }
+ public string SchoolId { get; set; }
+ }
+
+ public Students_BySchoolId()
+ {
+ Map = students => from student in students
+ select new IndexEntry
+ {
+ Name = student.StudentName,
+ SchoolId = student.School.Id // index nested property
+ };
+ }
+}
+`}
+
+
+
+
+{`public class Student
+{
+ public string StudentName { get; set; }
+ public School School { get; set; }
+ // ... other student properties
+}
+
+public class School
+{
+ public string SchoolName { get; set; }
+ public string Id { get; set; }
+}
+`}
+
+
+
+
+When querying the index and projecting fields from the matching _Student_ documents,
+if the `FindProjectedPropertyNameForIndex` convention is Not set,
+the client will use the [FindPropertyNameForIndex](../../client-api/configuration/conventions.mdx#findpropertynameforindex) convention instead when constructing the RQL sent to the server.
+
+This results in the following RQL query:
+(Note that while the high-level query uses `.Select(student => student.School.Id)`,
+the RQL sent to the server contains `School_Id`)
+
+
+
+
+{`// Query the index
+var query = session.Query<Students_BySchoolId.IndexEntry, Students_BySchoolId>()
+ .Where(x => x.Name == "someStudentName")
+    .OfType<Student>()
+ // Project only the School.Id property from the Student document in the results
+ .Select(student => student.School.Id)
+ .ToList();
+`}
+
+
+
+
+{`from index 'Students/BySchoolId'
+where Name == "someStudentName"
+select School_Id
+// Since the FindProjectedPropertyNameForIndex convention was not yet defined,
+// the 'School_Id' property name was generated using the FindPropertyNameForIndex convention.
+// ('School.Id' was converted to 'School_Id')
+`}
+
+
+
+
+The RQL generated by the above query projects the `School_Id` field, so the server first attempts to fetch this property from the [Stored index fields](../../indexes/storing-data-in-index.mdx)
+(this is the default behavior, learn more in [Projection behavior with a static-index](../../indexes/querying/projections.mdx#projection-behavior-with-a-static-index)).
+
+However, because this property is Not stored in the index, the server then tries to retrieve it from the _Student_ document instead.
+But the document does not contain a flat `School_Id` field — it contains the nested property `School.Id`,
+and so no results are returned for the `School_Id` field.
+
+To resolve this issue,
+set the `FindProjectedPropertyNameForIndex` convention to return the nested property name that the client should use when constructing the RQL query sent to the server:
+
+
+
+{`FindProjectedPropertyNameForIndex = (indexedType, indexName, path, prop) => path + prop
+`}
+
+
+
+Now, when using the same query, the RQL sent to the server will contain the nested `School.Id` property name,
+and the query will return results:
+
+
+
+
+{`// Query the index
+var query = session.Query<Students_BySchoolId.IndexEntry, Students_BySchoolId>()
+ .Where(x => x.Name == "someStudentName")
+    .OfType<Student>()
+ // Project only the School.Id property from the Student document in the results
+ .Select(student => student.School.Id)
+ .ToList();
+`}
+
+
+
+
+{`from index 'Students/BySchoolId'
+where Name == "someStudentName"
+select School.Id
+// The RQL sent to the server now contains 'School.Id',
+// as defined by the FindProjectedPropertyNameForIndex convention.
+`}
+
+
+
+
+
+{`// Syntax:
+public Func<Type, string, string, string, string> FindProjectedPropertyNameForIndex \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindPropertyNameForDynamicIndex
+* Use the `FindPropertyNameForDynamicIndex` convention to define a function that customizes the
+ property name that will be used in the RQL sent to the server when making a dynamic query.
+
+* The function receives the following input:
+ the index type, the index name, the current path, and the property path that is used in the query predicate.
+
+
+
+{`// The DEFAULT function:
+FindPropertyNameForDynamicIndex = (Type indexedType, string indexedName, string path, string prop) =>
+ path + prop
+`}
+
+
+
+
+{`// Syntax:
+public Func<Type, string, string, string, string> FindPropertyNameForDynamicIndex \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FindPropertyNameForIndex
+* Use the `FindPropertyNameForIndex` convention to define a function that customizes the name of the
+ index-field property that will be used in the RQL sent to the server when querying a static index.
+
+* The function receives the following input:
+ the index type, the index name, the current path, and the property path that is used in the query predicate.
+
+* DEFAULT: `[].` & `.` are replaced by `_`
+
+
+
+{`// The DEFAULT function:
+FindPropertyNameForIndex = (Type indexedType, string indexedName, string path, string prop) =>
+ (path + prop).Replace("[].", "_").Replace(".", "_")
+`}
+
+
+
+
+{`// Syntax:
+public Func<Type, string, string, string, string> FindPropertyNameForIndex \{ get; set; \}
+`}
+
+
+
+
+
+
+#### FirstBroadcastAttemptTimeout
+* Use the `FirstBroadcastAttemptTimeout` convention to set the timeout for the first broadcast attempt.
+
+* In the first attempt, the request executor will send a single request to the selected node.
+ Learn about the "selected node" in: [Client logic for choosing a node](../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* A [second attempt](../../client-api/configuration/conventions.mdx#secondbroadcastattempttimeout) will be held upon failure.
+
+* DEFAULT: `5 seconds`
+
+
+
+{`FirstBroadcastAttemptTimeout = TimeSpan.FromSeconds(10)
+`}
+
+
+
+
+{`// Syntax:
+public TimeSpan FirstBroadcastAttemptTimeout \{ get; set; \}
+`}
+
+
+
+
+
+
+#### HttpClientType
+* Use the `HttpClientType` convention to set the type of HTTP client you're using.
+
+* RavenDB uses the HTTP client type internally to manage its cache.
+
+* If you override the [CreateHttpClient](../../client-api/configuration/conventions.mdx#createhttpclient) convention to use a non-default HTTP client,
+ we advise that you also set `HttpClientType` so it returns the client type you are actually using.
+
+
+
+{`// The type of HTTP client you are using
+HttpClientType = typeof(MyHttpClient)
+`}
+
+
+
+
+{`// Syntax:
+public Type HttpClientType \{ get; set; \}
+`}
+
+
+
+
+
+
+#### HttpVersion
+* Use the `HttpVersion` convention to set the HTTP version the client will use when communicating
+ with the server.
+
+* DEFAULT:
+ * When this convention is explicitly set to `null`, the default HTTP version provided by your .NET framework is used.
+ * Otherwise, the default HTTP version is set to `System.Net.HttpVersion.Version20` (HTTP 2.0).
+
+
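+For example, to force HTTP 1.1 instead of the default:
+
+
+
+{`HttpVersion = System.Net.HttpVersion.Version11
+`}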
+
+{`// Syntax:
+public Version HttpVersion \{ get; set; \}
+`}
+
+
+
+
+
+
+#### IdentityPartsSeparator
+* Use the `IdentityPartsSeparator` convention to customize the **default ID separator** for document IDs generated automatically by the
+ [HiLo algorithm](../../client-api/document-identifiers/hilo-algorithm).
+
+* The value can be any char except `|` (pipe), which is reserved for identity IDs.
+
+* DEFAULT: `/` (forward slash)
+
+* Applies only to: [HiLo IDs](../../server/kb/document-identifier-generation.mdx#hilo-algorithm-id).
+
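+For example, to generate HiLo IDs such as `users-32-A` instead of `users/32-A`:
+
+
+
+{`IdentityPartsSeparator = '-'
+`}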
+
+
+{`// Syntax:
+public char IdentityPartsSeparator \{ get; set; \}
+`}
+
+
+
+
+
+
+#### LoadBalanceBehavior
+#### LoadBalancerPerSessionContextSelector
+#### LoadBalancerContextSeed
+* Configure the **load balance behavior** by setting the following conventions:
+ * `LoadBalanceBehavior`
+ * `LoadBalancerPerSessionContextSelector`
+ * `LoadBalancerContextSeed`
+
+* Learn more in the dedicated [Load balance behavior](../../client-api/configuration/load-balance/load-balance-behavior.mdx) article.
+
+
+
+
+#### MaxHttpCacheSize
+* Use the `MaxHttpCacheSize` convention to set the maximum HTTP cache size.
+ This setting will affect all the databases accessed by the Document Store.
+
+* DEFAULT:
+
+ | System | Usable Memory | Default Value |
+ |----------|-------------------------------------------------------------------------------------------------------|----------------------------|
+  | 64-bit   | Lower than or equal to 3GB                      | 64MB  |
+  | 64-bit   | Greater than 3GB and lower than or equal to 6GB | 128MB |
+  | 64-bit   | Greater than 6GB                                | 512MB |
+  | 32-bit   | Any                                             | 32MB  |
+
+* **Disabling Caching**:
+
+ * To disable caching globally, set `MaxHttpCacheSize` to zero.
+ * To disable caching per session, see: [Disable caching per session](../../client-api/session/configuration/how-to-disable-caching.mdx).
+
+* Note: RavenDB also supports Aggressive Caching.
+ Learn more about this in the [Setup aggressive caching](../../client-api/how-to/setup-aggressive-caching.mdx) article.
+
+
+
+{`MaxHttpCacheSize = new Size(256, SizeUnit.Megabytes) // Set max cache size
+`}
+
+
+
+
+{`MaxHttpCacheSize = new Size(0, SizeUnit.Megabytes) // Disable caching
+`}
+
+
+
+
+{`// Syntax:
+public Size MaxHttpCacheSize \{ get; set; \}
+`}
+
+
+
+
+
+
+#### MaxNumberOfRequestsPerSession
+* Use the `MaxNumberOfRequestsPerSession` convention to set the maximum number of requests per session.
+
+* DEFAULT: `30`
+
+
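+For example, to raise the limit for a session that legitimately issues many requests:
+
+
+
+{`MaxNumberOfRequestsPerSession = 100
+`}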
+
+{`// Syntax:
+public int MaxNumberOfRequestsPerSession \{ get; set; \}
+`}
+
+
+
+
+
+
+#### Modify serialization of property name
+* Different clients use different casing conventions for entity field names. For example:
+
+ | Language | Default casing | Example |
+ |------------|-----------------|------------|
+ | C# | PascalCase | OrderLines |
+ | Java | camelCase | orderLines |
+ | JavaScript | camelCase | orderLines |
+
+* By default, when saving an entity, the naming convention used by the client is reflected in the JSON document properties on the server-side.
+ This default serialization behavior can be customized to facilitate language interoperability.
+
+* **Example**:
+
+ Set `CustomizeJsonSerializer` and `PropertyNameConverter` to serialize an entity's properties as camelCase from a C# client:
+
+
+
+{`Serialization = new NewtonsoftJsonSerializationConventions
+\{
+ // .Net properties will be serialized as camelCase in the JSON document when storing an entity
+ // and deserialized back to PascalCase
+ CustomizeJsonSerializer = s => s.ContractResolver = new CamelCasePropertyNamesContractResolver()
+\},
+
+// In addition, the following convention is required when
+// making a query that filters by a field name and when indexing.
+PropertyNameConverter = memberInfo => FirstCharToLower(memberInfo.Name)
+`}
+
+
+
+
+{`private string FirstCharToLower(string str) => $"\{Char.ToLower(str[0])\}\{str.Substring(1)\}";
+`}
+
+
+
+
+{`// Syntax:
+public ISerializationConventions Serialization \{ get; set; \}
+`}
+
+
+
+
+
+
+#### OperationStatusFetchMode
+* Use the `OperationStatusFetchMode` convention to set how an [operation](../../client-api/operations/what-are-operations.mdx) gets its status when [waiting for completion](../../client-api/operations/what-are-operations.mdx#wait-for-completion).
+
+* DEFAULT:
+ By default, the value is set to `ChangesApi` which uses the WebSocket protocol underneath when a connection is established with the server.
+
+* On some older systems, like Windows 7, the WebSocket protocol might not be available due to OS and .NET Framework limitations.
+ To bypass this issue, the value can be changed to `Polling`.
+
+
+
+{`OperationStatusFetchMode = OperationStatusFetchMode.ChangesApi // ChangesApi | Polling
+`}
+
+
+
+
+{`// Syntax:
+public OperationStatusFetchMode OperationStatusFetchMode \{ get; set; \}
+`}
+
+
+
+
+
+
+#### PreserveDocumentPropertiesNotFoundOnModel
+* Loading a document using a model that lacks some of the document's properties will result in
+  those properties being dropped from the loaded entity; no exception is thrown.
+
+* Setting the `PreserveDocumentPropertiesNotFoundOnModel` convention to `true`
+ allows the client to check (via [whatChanged](../../client-api/session/how-to/check-if-there-are-any-changes-on-a-session.mdx#get-session-changes)
+ or via [WhatChangedFor](../../client-api/session/how-to/check-if-entity-has-changed.mdx#get-entity-changes) methods)
+ for the missing properties on the entity after loading the document.
+
+* DEFAULT: `true`
+
+
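+A sketch of the effect (assuming a 'Person' document in the database that also contains an 'Age' field, which the narrower model below does not declare):
+
+
+
+{`public class PersonNameOnly
+\{
+    public string Id \{ get; set; \}
+    public string Name \{ get; set; \}
+\}
+
+using (var session = store.OpenSession())
+\{
+    var person = session.Load<PersonNameOnly>("people/1-A");
+
+    // With the convention set to true (the default), the document's 'Age' property
+    // is preserved and is reported by the session's change-tracking methods:
+    var changes = session.Advanced.WhatChanged();
+\}
+`}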
+
+{`// Syntax:
+public bool PreserveDocumentPropertiesNotFoundOnModel \{ get; set; \}
+`}
+
+
+
+
+
+
+#### ReadBalanceBehavior
+* Configure the **read request behavior** by setting the `ReadBalanceBehavior` convention.
+
+* Learn more in the dedicated [Read balance behavior](../../client-api/configuration/load-balance/read-balance-behavior.mdx) article.
+
+
+
+
+#### RequestTimeout
+* Use the `RequestTimeout` convention to define the global request timeout value for all `RequestExecutors` created per database.
+
+* DEFAULT: `null` (the default HTTP client timeout will be applied - 12h)
+
+
+
+{`RequestTimeout = TimeSpan.FromSeconds(90)
+`}
+
+
+
+
+{`// Syntax:
+public TimeSpan? RequestTimeout \{ get; set; \}
+`}
+
+
+
+
+
+
+#### ResolveTypeFromClrTypeName
+* Use the `ResolveTypeFromClrTypeName` convention to define a function that resolves the CLR type
+ from the CLR type name.
+
+* DEFAULT: The type is resolved from the given CLR type name.
+
+
+
+{`// Resolve the .NET type from the stored CLR type name
+ResolveTypeFromClrTypeName = clrTypeName => Type.GetType(clrTypeName)
+`}
+
+
+
+
+{`// Syntax:
+public Func<string, Type> ResolveTypeFromClrTypeName \{ get; set; \}
+`}
+
+
+
+
+
+
+#### SaveEnumsAsIntegers
+* When setting the `SaveEnumsAsIntegers` convention to `true`,
+ C# `enum` types will be stored and queried as integers, rather than their string representations.
+
+* DEFAULT: `false` (save as strings)
+
+
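+For example, with the convention enabled, a hypothetical `OrderStatus.Shipped` enum value would be stored as its numeric value rather than as the string `"Shipped"`:
+
+
+
+{`SaveEnumsAsIntegers = true
+`}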
+
+{`// Syntax:
+public bool SaveEnumsAsIntegers \{ get; set; \}
+`}
+
+
+
+
+
+
+#### SecondBroadcastAttemptTimeout
+* Use the `SecondBroadcastAttemptTimeout` convention to set the timeout for the second broadcast attempt.
+
+* Upon failure of the [first attempt](../../client-api/configuration/conventions.mdx#firstbroadcastattempttimeout) the request executor will resend the command to all nodes simultaneously.
+
+* DEFAULT: `30 seconds`
+
+
+
+{`SecondBroadcastAttemptTimeout = TimeSpan.FromSeconds(20)
+`}
+
+
+
+
+{`// Syntax:
+public TimeSpan SecondBroadcastAttemptTimeout \{ get; set; \}
+`}
+
+
+
+
+
+
+#### SendApplicationIdentifier
+* Set the `SendApplicationIdentifier` convention to `true` to enable sending a unique application identifier to the RavenDB Server.
+
+* Setting it to _true_ allows the server to issue performance hint notifications to the client,
+  e.g. during robust topology update requests, which could indicate Client API misuse that impacts overall performance.
+
+* DEFAULT: `true`
+
+
+
+{`// Syntax:
+public bool SendApplicationIdentifier \{ get; set; \}
+`}
+
+
+
+
+
+
+#### ShouldIgnoreEntityChanges
+* Set the `ShouldIgnoreEntityChanges` convention to disable entity tracking for certain entities.
+
+* Learn more in [Customize tracking in conventions](../../client-api/session/configuration/how-to-disable-tracking.mdx#customize-tracking-in-conventions).
+
+
+
+
+#### TopologyCacheLocation
+* Use the `TopologyCacheLocation` convention to change the location of the topology cache files
+ (`*.raven-database-topology` & `*.raven-cluster-topology`).
+
+* Directory existence and writing permissions will be checked when setting this value.
+
+* DEFAULT: `AppContext.BaseDirectory` (The application's base directory)
+
+
+
+{`TopologyCacheLocation = @"C:\\RavenDB\\TopologyCache"
+`}
+
+
+
+
+{`// Syntax:
+public string TopologyCacheLocation \{ get; set; \}
+`}
+
+
+
+
+
+
+#### TransformTypeCollectionNameToDocumentIdPrefix
+* Use the `TransformTypeCollectionNameToDocumentIdPrefix` convention to define a function that will
+ customize the document ID prefix from the collection name.
+
+* DEFAULT:
+ By default, the document id prefix is determined as follows:
+
+| Number of uppercase letters in collection name | Document ID prefix |
+|--------------------------------------------------|-------------------------------------------------------------|
+| `<= 1` | Use the collection name with all lowercase letters |
+| `> 1` | Use the collection name as is, preserving the original case |
+
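+For example (a sketch), to force lowercase prefixes regardless of how many uppercase letters the collection name contains:
+
+
+
+{`TransformTypeCollectionNameToDocumentIdPrefix = collectionName => collectionName.ToLowerInvariant()
+`}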
+
+
+{`// Syntax:
+public Func<string, string> TransformTypeCollectionNameToDocumentIdPrefix \{ get; set; \}
+`}
+
+
+
+
+
+
+#### UseHttpCompression
+* When setting the `UseHttpCompression` convention to `true`,
+  `Gzip` compression will be used when sending the content of an HTTP request.
+
+* When the convention is set to `false`, content will not be compressed.
+
+* DEFAULT: `true`
+
+
+
+{`// Syntax:
+public bool UseHttpCompression \{ get; set; \}
+`}
+
+
+
+
+
+
+#### UseHttpDecompression
+* When setting the `UseHttpDecompression` convention to `true`,
+ the client can accept compressed HTTP response content and will use zstd/gzip/deflate decompression methods.
+
+* DEFAULT: `true`
+
+
+
+{`// Syntax:
+public bool UseHttpDecompression \{ get; set; \}
+`}
+
+
+
+
+
+
+
+#### HttpCompressionAlgorithm
+* Use this convention to set the HTTP compression algorithm
+  (see [UseHttpDecompression](../../client-api/configuration/conventions.mdx#usehttpdecompression) above).
+
+* DEFAULT: `Zstd`
+
+ In RavenDB versions up to `6.2`, HTTP compression is set to `Gzip` by default.
+ In RavenDB versions from `7.0` on, the default has changed and is now `Zstd`.
+
+
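+For example, to revert to the pre-`7.0` behavior:
+
+
+
+{`HttpCompressionAlgorithm = HttpCompressionAlgorithm.Gzip
+`}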
+
+
+{`// Syntax:
+public HttpCompressionAlgorithm HttpCompressionAlgorithm \{ get; set; \}
+`}
+
+
+
+
+
+
+
+
+#### UseOptimisticConcurrency
+* When setting the `UseOptimisticConcurrency` convention to `true`,
+ Optimistic Concurrency checks will be applied for all sessions opened from the Document Store.
+
+* Learn more about Optimistic Concurrency and the various ways to enable it in the
+ [how to enable optimistic concurrency](../../client-api/session/configuration/how-to-enable-optimistic-concurrency.mdx)
+ article.
+
+* DEFAULT: `false`
+
+
+
+{`// Syntax:
+public bool UseOptimisticConcurrency \{ get; set; \}
+`}
+
+
+
+
+
+
+#### WaitForIndexesAfterSaveChangesTimeout
+* Use the `WaitForIndexesAfterSaveChangesTimeout` convention to set the default timeout for the
+ `DocumentSession.Advanced.WaitForIndexesAfterSaveChanges` method.
+
+* DEFAULT: 15 Seconds
+
+
+
+{`WaitForIndexesAfterSaveChangesTimeout = TimeSpan.FromSeconds(10)
+`}
+
+
+
+
+{`// Syntax:
+public TimeSpan WaitForIndexesAfterSaveChangesTimeout \{ get; set; \}
+`}
+
+
+
+
+
+
+#### WaitForNonStaleResultsTimeout
+* Use the `WaitForNonStaleResultsTimeout` convention to set the default timeout used by the
+ `WaitForNonStaleResults` method when querying.
+
+* DEFAULT: 15 Seconds
+
+
+
+{`WaitForNonStaleResultsTimeout = TimeSpan.FromSeconds(10)
+`}
+
+
+
+
+{`// Syntax:
+public TimeSpan WaitForNonStaleResultsTimeout \{ get; set; \}
+`}
+
+
+
+
+
+
+#### WaitForReplicationAfterSaveChangesTimeout
+* Use the `WaitForReplicationAfterSaveChangesTimeout` convention to set the default timeout for the
+  `DocumentSession.Advanced.WaitForReplicationAfterSaveChanges` method.
+
+* DEFAULT: 15 Seconds
+
+
+
+{`WaitForReplicationAfterSaveChangesTimeout = TimeSpan.FromSeconds(10)
+`}
+
+
+
+
+{`// Syntax:
+public TimeSpan WaitForReplicationAfterSaveChangesTimeout \{ get; set; \}
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/_conventions-nodejs.mdx b/versioned_docs/version-7.1/client-api/configuration/_conventions-nodejs.mdx
new file mode 100644
index 0000000000..f5b81fa38a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/_conventions-nodejs.mdx
@@ -0,0 +1,599 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* **Conventions** in RavenDB are customizable settings that users can configure to tailor client behaviors according to their preferences.
+
+* In this article:
+ * [How to set conventions](../../client-api/configuration/conventions.mdx#how-to-set-conventions)
+ * [Conventions:](../../client-api/configuration/conventions.mdx#conventions:)
+ [customFetch](../../client-api/configuration/conventions.mdx#customfetch)
+ [disableAtomicDocumentWritesInClusterWideTransaction](../../client-api/configuration/conventions.mdx#disableatomicdocumentwritesinclusterwidetransaction)
+ [disableTopologyUpdates](../../client-api/configuration/conventions.mdx#disabletopologyupdates)
+ [findCollectionName](../../client-api/configuration/conventions.mdx#findcollectionname)
+ [findJsType](../../client-api/configuration/conventions.mdx#_findjstype)
+ [findJsTypeName](../../client-api/configuration/conventions.mdx#_findjstypename)
+ [firstBroadcastAttemptTimeout](../../client-api/configuration/conventions.mdx#firstbroadcastattempttimeout)
+ [identityPartsSeparator](../../client-api/configuration/conventions.mdx#identitypartsseparator)
+ [loadBalanceBehavior](../../client-api/configuration/conventions.mdx#loadbalancebehavior)
+ [loadBalancerContextSeed](../../client-api/configuration/conventions.mdx#loadbalancebehavior)
+ [loadBalancerPerSessionContextSelector](../../client-api/configuration/conventions.mdx#loadbalancebehavior)
+ [maxHttpCacheSize](../../client-api/configuration/conventions.mdx#maxhttpcachesize)
+ [maxNumberOfRequestsPerSession](../../client-api/configuration/conventions.mdx#maxnumberofrequestspersession)
+ [readBalanceBehavior](../../client-api/configuration/conventions.mdx#readbalancebehavior)
+ [requestTimeout](../../client-api/configuration/conventions.mdx#requesttimeout)
+ [secondBroadcastAttemptTimeout](../../client-api/configuration/conventions.mdx#secondbroadcastattempttimeout)
+ [sendApplicationIdentifier](../../client-api/configuration/conventions.mdx#sendapplicationidentifier)
+ [shouldIgnoreEntityChanges](../../client-api/configuration/conventions.mdx#shouldignoreentitychanges)
+ [storeDatesInUtc](../../client-api/configuration/conventions.mdx#storedatesinutc)
+ [storeDatesWithTimezoneInfo](../../client-api/configuration/conventions.mdx#storedateswithtimezoneinfo)
+ [syncJsonParseLimit](../../client-api/configuration/conventions.mdx#syncjsonparselimit)
+ [throwIfQueryPageSizeIsNotSet](../../client-api/configuration/conventions.mdx#throwifquerypagesizeisnotset)
+ [transformClassCollectionNameToDocumentIdPrefix](../../client-api/configuration/conventions.mdx#transformclasscollectionnametodocumentidprefix)
+ [useCompression](../../client-api/configuration/conventions.mdx#usecompression)
+ [useJsonlStreaming](../../client-api/configuration/conventions.mdx#usejsonlstreaming)
+ [useOptimisticConcurrency](../../client-api/configuration/conventions.mdx#useoptimisticconcurrency)
+ [waitForIndexesAfterSaveChangesTimeout](../../client-api/configuration/conventions.mdx#waitforindexesaftersavechangestimeout)
+ [waitForNonStaleResultsTimeout](../../client-api/configuration/conventions.mdx#waitfornonstaleresultstimeout)
+ [waitForReplicationAfterSaveChangesTimeout](../../client-api/configuration/conventions.mdx#waitforreplicationaftersavechangestimeout)
+
+
+## How to set conventions
+
+* Access the conventions via the `conventions` property of the `DocumentStore` object.
+
+* The conventions set on a Document Store will apply to ALL [sessions](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx) and [operations](../../client-api/operations/what-are-operations.mdx) associated with that store.
+
+* Conventions can only be customized **before** calling `documentStore.initialize()`.
+ Trying to do so after calling _initialize()_ will throw an exception.
+
+
+
+{`const documentStore = new DocumentStore(["serverUrl_1", "serverUrl_2", "..."], "DefaultDB");
+
+// Set conventions HERE, e.g.:
+documentStore.conventions.maxNumberOfRequestsPerSession = 50;
+documentStore.conventions.disableTopologyUpdates = true;
+
+documentStore.initialize();
+
+// * Here you can interact with the RavenDB store:
+// open sessions, create or query for documents, perform operations, etc.
+
+// * Conventions CANNOT be set here after calling initialize()
+`}
+
+
+
+
+
+## Conventions:
+
+
+
+#### customFetch
+* Use the `customFetch` convention to override the default _fetch_ method.
+  This is useful, for example, for enabling the RavenDB Node.js client on Cloudflare Workers.
+
+* DEFAULT: undefined
+
+
+
+{`// Returns the custom fetch implementation
+get customFetch();
+// Set a custom fetch implementation
+// (e.g., a fetch function bound to a Cloudflare Worker binding of type "mtls_certificate")
+set customFetch(customFetch);
+`}
+
+
+
+
+
+
+#### disableAtomicDocumentWritesInClusterWideTransaction
+* EXPERT ONLY:
+ Use the `disableAtomicDocumentWritesInClusterWideTransaction` convention to disable automatic
+ atomic writes with cluster write transactions.
+
+* When set to `true`, only explicitly-added compare exchange values will be considered when validating cluster-wide transactions.
+
+* DEFAULT: `false`
+
+
+
+{`// Returns a boolean value
+get disableAtomicDocumentWritesInClusterWideTransaction();
+// Set a boolean value
+set disableAtomicDocumentWritesInClusterWideTransaction(
+ disableAtomicDocumentWritesInClusterWideTransaction
+);
+`}
+
+
+
+
+
+
+#### disableTopologyUpdates
+* When setting the `disableTopologyUpdates` convention to `true`,
+ no database topology updates will be sent from the server to the client (e.g. adding or removing a node).
+
+* DEFAULT: `false`
+
+
+
+{`// Returns a boolean value
+get disableTopologyUpdates();
+// Set a boolean value
+set disableTopologyUpdates(value);
+`}
+
+
+
+
+
+
+#### findCollectionName
+* Use the `findCollectionName` convention to define a function that will customize the collection name
+  from a given type.
+
+* DEFAULT: The collection name will be the plural form of the type name.
+
+
+
+{`// Returns a method
+get findCollectionName();
+// Set a method
+set findCollectionName(value);
+`}
+
+
+
+
+
+
+#### findJsType
+* Use the `findJsType` convention to define a function that finds the class of a document (if one exists).
+
+* The type is retrieved from the `Raven-Node-Type` property under the `@metadata` key in the document.
+
+* DEFAULT: `null`
+
+
+
+{`// Returns a method
+get findJsType();
+// Set a method
+set findJsType(value);
+`}
+
+
+
+
+
+
+#### findJsTypeName
+* Use the `findJsTypeName` convention to define a function that returns the class type name from a given type.
+
+* The class name will be stored in the entity metadata.
+
+* DEFAULT: `null`
+
+
+
+{`// Returns a method
+get findJsTypeName();
+// Set a method
+set findJsTypeName(value);
+`}
+
+
+
+
+
+
+#### firstBroadcastAttemptTimeout
+* Use the `firstBroadcastAttemptTimeout` convention to set the timeout for the first broadcast attempt.
+
+* In the first attempt, the request executor will send a single request to the selected node.
+ Learn about the "selected node" in: [Client logic for choosing a node](../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* A [second attempt](../../client-api/configuration/conventions.mdx#secondbroadcastattempttimeout) will be held upon failure.
+
+* DEFAULT: `5 seconds`
+
+
+
+{`// Returns a number
+get firstBroadcastAttemptTimeout();
+// Set a number
+set firstBroadcastAttemptTimeout(firstBroadcastAttemptTimeout);
+`}
+
+
+
+
+
+
+#### identityPartsSeparator
+* Use the `identityPartsSeparator` convention to customize the **default ID separator** for document IDs generated automatically by the
+ [HiLo algorithm](../../client-api/document-identifiers/hilo-algorithm).
+
+* The value can be any char except `|` (pipe), which is reserved for identity IDs.
+
+* DEFAULT: `/` (forward slash)
+
+* Applies only to: [HiLo IDs](../../server/kb/document-identifier-generation.mdx#hilo-algorithm-id).
+
+
+
+{`// Returns a string
+get identityPartsSeparator();
+// Set a string
+set identityPartsSeparator(value);
+`}
+
+
+
+
+
+
+#### loadBalanceBehavior
+#### loadBalancerPerSessionContextSelector
+#### loadBalancerContextSeed
+* Configure the **load balance behavior** by setting the following conventions:
+ * `loadBalanceBehavior`
+ * `loadBalancerPerSessionContextSelector`
+ * `loadBalancerContextSeed`
+
+* Learn more in the dedicated [Load balance behavior](../../client-api/configuration/load-balance/load-balance-behavior.mdx) article.
+
+
+
+
+#### maxHttpCacheSize
+* Use the `maxHttpCacheSize` convention to set the maximum HTTP cache size.
+ This setting will affect all the databases accessed by the Document Store.
+
+* DEFAULT: `128 MB`
+
+* **Disabling Caching**:
+
+  * To disable caching globally, set `maxHttpCacheSize` to zero.
+ * To disable caching per session, see: [Disable caching per session](../../client-api/session/configuration/how-to-disable-caching.mdx).
+
+* Note: RavenDB also supports Aggressive Caching.
+  Learn more in the [Setup aggressive caching](../../client-api/how-to/setup-aggressive-caching.mdx) article.
+
+
+
+{`// Returns a number
+get maxHttpCacheSize();
+// Set a number
+set maxHttpCacheSize(value);
+`}
+
+
+
+
+
+
+#### maxNumberOfRequestsPerSession
+* Use the `maxNumberOfRequestsPerSession` convention to set the maximum number of requests per session.
+
+* DEFAULT: `30`
+
+
+
+{`// Returns a number
+get maxNumberOfRequestsPerSession();
+// Set a number
+set maxNumberOfRequestsPerSession(value);
+`}
+
+
+
+
+
+
+#### readBalanceBehavior
+* Configure the **read request behavior** by setting the `readBalanceBehavior` convention.
+
+* Learn more in the dedicated [Read balance behavior](../../client-api/configuration/load-balance/read-balance-behavior.mdx) article.
+
+
+
+
+#### requestTimeout
+* Use the `requestTimeout` convention to define the global request timeout value for all `RequestExecutors` created per database.
+
+* DEFAULT: `null` (the default HTTP client timeout will be applied - 12h)
+
+
+
+{`// Returns a number
+get requestTimeout();
+// Set a number
+set requestTimeout(value);
+`}
+
+
+
+
+
+
+#### secondBroadcastAttemptTimeout
+* Use the `secondBroadcastAttemptTimeout` convention to set the timeout for the second broadcast attempt.
+
+* Upon failure of the [first attempt](../../client-api/configuration/conventions.mdx#firstbroadcastattempttimeout) the request executor will resend the command to all nodes simultaneously.
+
+* DEFAULT: `30 seconds`
+
+
+
+{`// Returns a number
+get secondBroadcastAttemptTimeout();
+// Set a number
+set secondBroadcastAttemptTimeout(timeout);
+`}
+
+
+
+
+
+
+#### sendApplicationIdentifier
+* Set the `sendApplicationIdentifier` convention to `true` to enable sending a unique application identifier to the RavenDB Server.
+
+* Setting it to _true_ allows the server to issue performance hint notifications to the client,
+  e.g. during robust topology update requests, which could indicate Client API misuse that impacts overall performance.
+
+* DEFAULT: `true`
+
+
+
+{`// Returns a boolean
+get sendApplicationIdentifier();
+// Set a boolean
+set sendApplicationIdentifier(sendApplicationIdentifier)
+`}
+
+
+
+
+
+
+#### shouldIgnoreEntityChanges
+* Set the `shouldIgnoreEntityChanges` convention to disable entity tracking for certain entities.
+
+* Learn more in [Customize tracking in conventions](../../client-api/session/configuration/how-to-disable-tracking.mdx#customize-tracking-in-conventions).
+
+
+
+
+#### storeDatesInUtc
+* When setting the `storeDatesInUtc` convention to `true`,
+ DateTime values will be stored in the database in UTC format.
+
+* DEFAULT: `false`
+
+
+
+{`// Returns a boolean
+get storeDatesInUtc();
+// Set a boolean
+set storeDatesInUtc(value);
+`}
+
+
+
+
+
+
+#### storeDatesWithTimezoneInfo
+* When setting the `storeDatesWithTimezoneInfo` to `true`,
+ DateTime values will be stored in the database with their time zone information included.
+
+* DEFAULT: `false`
+
+
+
+{`// Returns a boolean
+get storeDatesWithTimezoneInfo();
+// Set a boolean
+set storeDatesWithTimezoneInfo(value);
+`}
+
+
+
+
+
+
+#### syncJsonParseLimit
+* Use the `syncJsonParseLimit` convention to define the maximum size for the _sync_ parsing of the JSON data responses received from the server.
+ For data exceeding this size, the client switches to _async_ parsing.
+
+* DEFAULT: `2 * 1_024 * 1_024`
+
+
+
+{`// Returns a number
+get syncJsonParseLimit();
+// Set a number
+set syncJsonParseLimit(value);
+`}
+
+
+
+
+
+
+#### throwIfQueryPageSizeIsNotSet
+* When setting the `throwIfQueryPageSizeIsNotSet` convention to `true`,
+ an exception will be thrown if a query is performed without explicitly setting a page size.
+
+* This can be useful during development to identify potential performance bottlenecks
+ since there is no limitation on the number of results returned from the server.
+
+* DEFAULT: `false`
+
+
+
+{`// Returns a boolean
+get throwIfQueryPageSizeIsNotSet();
+// Set a boolean
+set throwIfQueryPageSizeIsNotSet(value);
+`}
+
+
+
+
+
+
+#### transformClassCollectionNameToDocumentIdPrefix
+* Use the `transformClassCollectionNameToDocumentIdPrefix` convention to define a function that will
+  customize the document ID prefix from the collection name.
+
+* DEFAULT:
+ By default, the document id prefix is determined as follows:
+
+| Number of uppercase letters in collection name | Document ID prefix |
+|--------------------------------------------------|-------------------------------------------------------------|
+| `<= 1` | Use the collection name with all lowercase letters |
+| `> 1` | Use the collection name as is, preserving the original case |
+
+
+
+{`// Returns a method
+get transformClassCollectionNameToDocumentIdPrefix();
+// Set a method
+set transformClassCollectionNameToDocumentIdPrefix(value);
+`}
+
+
+
+
+
+
+#### useCompression
+* Set the `useCompression` convention to `true` to accept the **response** in compressed format; the HTTP response content is then decompressed automatically.
+
+* `Gzip` compression is always applied when sending content in an HTTP request.
+
+* DEFAULT: `true`
+
+
+
+{`// Returns a boolean
+get useCompression();
+// Set a boolean
+set useCompression(value);
+`}
+
+
+
+
+
+
+
+#### useJsonlStreaming
+* Set the `useJsonlStreaming` convention to `true` when streaming query results as JSON Lines (JSONL) format.
+
+* DEFAULT: `true`
+
+
+
+{`// Returns a boolean
+get useJsonlStreaming();
+// Set a boolean
+set useJsonlStreaming(value);
+`}
+
+
+
+
+
+
+#### useOptimisticConcurrency
+* When setting the `useOptimisticConcurrency` convention to `true`,
+ Optimistic Concurrency checks will be applied for all sessions opened from the Document Store.
+
+* Learn more about Optimistic Concurrency and the various ways to enable it in the
+  [How to enable optimistic concurrency](../../client-api/session/configuration/how-to-enable-optimistic-concurrency.mdx) article.
+
+* DEFAULT: `false`
+
+
+
+{`// Returns a boolean
+get useOptimisticConcurrency();
+// Set a boolean
+set useOptimisticConcurrency(value);
+`}
+
+
+
+
+
+
+#### waitForIndexesAfterSaveChangesTimeout
+* Use the `waitForIndexesAfterSaveChangesTimeout` convention to set the default timeout for the
+ `documentSession.advanced.waitForIndexesAfterSaveChanges` method.
+
+* DEFAULT: 15 Seconds
+
+
+
+{`// Returns a number
+get waitForIndexesAfterSaveChangesTimeout();
+// Set a number
+set waitForIndexesAfterSaveChangesTimeout(value);
+`}
+
+
+
+
+
+
+#### waitForNonStaleResultsTimeout
+* Use the `waitForNonStaleResultsTimeout` convention to set the default timeout used by the
+ `waitForNonStaleResults` method when querying.
+
+* DEFAULT: 15 Seconds
+
+
+
+{`// Returns a number
+get waitForNonStaleResultsTimeout();
+// Set a number
+set waitForNonStaleResultsTimeout(value);
+`}
+
+
+
+
+
+
+#### waitForReplicationAfterSaveChangesTimeout
+* Use the `waitForReplicationAfterSaveChangesTimeout` convention to set the default timeout for the
+  `documentSession.advanced.waitForReplicationAfterSaveChanges` method.
+
+* DEFAULT: 15 Seconds
+
+
+
+{`// Returns a number
+get waitForReplicationAfterSaveChangesTimeout();
+// Set a number
+set waitForReplicationAfterSaveChangesTimeout(value);
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/_deserialization-csharp.mdx b/versioned_docs/version-7.1/client-api/configuration/_deserialization-csharp.mdx
new file mode 100644
index 0000000000..52783c3ae3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/_deserialization-csharp.mdx
@@ -0,0 +1,98 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+Use the methods described in this page to customize the [conventions](../../client-api/configuration/conventions.mdx)
+by which entities are deserialized as they are received by the client.
+
+* In this page:
+ * [CustomizeJsonDeserializer](../../client-api/configuration/deserialization.mdx#customizejsondeserializer)
+ * [DeserializeEntityFromBlittable](../../client-api/configuration/deserialization.mdx#deserializeentityfromblittable)
+ * [PreserveDocumentPropertiesNotFoundOnModel](../../client-api/configuration/deserialization.mdx#preservedocumentpropertiesnotfoundonmodel)
+ * [DefaultRavenSerializationBinder](../../client-api/configuration/deserialization.mdx#defaultravenserializationbinder)
+ * [Number Deserialization](../../client-api/configuration/deserialization.mdx#number-deserialization)
+
+
+## Deserialization
+
+## CustomizeJsonDeserializer
+
+* The `JsonSerializer` object is used by the client to deserialize entities
+ loaded from the server.
+* Use the `CustomizeJsonDeserializer` convention to modify `JsonSerializer`
+ by registering a deserialization customization action.
+
+
+
+{`Conventions =
+\{
+ Serialization = new NewtonsoftJsonSerializationConventions
+ \{
+ CustomizeJsonDeserializer = serializer => throw new CodeOmitted()
+ \}
+\}
+`}
+
+
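+For example (an illustrative customization, not a recommendation), dates could be kept as raw strings instead of being parsed:
+
+
+
+{`Conventions =
+\{
+    Serialization = new NewtonsoftJsonSerializationConventions
+    \{
+        // Keep date strings as-is instead of parsing them into DateTime values
+        CustomizeJsonDeserializer = serializer =>
+            serializer.DateParseHandling = DateParseHandling.None
+    \}
+\}
+`}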
+
+## DeserializeEntityFromBlittable
+
+* Use the `DeserializeEntityFromBlittable` convention to customize entity
+ deserialization from a blittable JSON.
+
+
+
+{`Conventions =
+\{
+ Serialization = new NewtonsoftJsonSerializationConventions
+ \{
+ DeserializeEntityFromBlittable = (type, blittable) => throw new CodeOmitted()
+ \}
+\}
+`}
+
+
+
+## PreserveDocumentPropertiesNotFoundOnModel
+
+* Some document properties may have no matching property on the model and are therefore not deserialized into the entity.
+* Set the `PreserveDocumentPropertiesNotFoundOnModel` convention to `true`
+ to **preserve** such properties when the document is saved.
+* Set the `PreserveDocumentPropertiesNotFoundOnModel` convention to `false`
+ to **remove** such properties when the document is saved.
+* Default: `true`
+
+
+
+{`Conventions =
+\{
+ PreserveDocumentPropertiesNotFoundOnModel = true
+\}
+`}
+
+
+
+## DefaultRavenSerializationBinder
+
+Use the `DefaultRavenSerializationBinder` convention and its methods to
+prevent gadgets from running RCE (Remote Code Execution) attacks while
+data is deserialized by the client.
+
+Read about this security convention and maintaining deserialization security
+[here](../../client-api/security/deserialization-security.mdx).
+
+
+## Number Deserialization
+
+* RavenDB client supports all common numeric value types (including `int`, `long`,
+ `double`, `decimal`, etc.) out of the box.
+* Note that although deserialization of `decimals` is fully supported, there are
+ [server side limitations](../../server/kb/numbers-in-ravendb.mdx) to numbers in this range.
+* Other number types, like `BigInteger`, must be handled using custom deserialization.
+
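+As a sketch of such custom handling (the converter below is hypothetical, not part of the client API), a Newtonsoft `JsonConverter` for `BigInteger` can be registered through the `CustomizeJsonDeserializer` convention:
+
+
+
+{`public class BigIntegerJsonConverter : JsonConverter
+\{
+    public override bool CanConvert(Type objectType) => objectType == typeof(BigInteger);
+
+    public override object ReadJson(JsonReader reader, Type objectType,
+        object existingValue, JsonSerializer serializer)
+        => BigInteger.Parse(reader.Value.ToString());
+
+    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
+        => writer.WriteValue(value.ToString());
+\}
+
+// Registration (sketch):
+// CustomizeJsonDeserializer = serializer => serializer.Converters.Add(new BigIntegerJsonConverter())
+`}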
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/_deserialization-java.mdx b/versioned_docs/version-7.1/client-api/configuration/_deserialization-java.mdx
new file mode 100644
index 0000000000..0b62b54a37
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/_deserialization-java.mdx
@@ -0,0 +1,24 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+## Customize ObjectMapper
+
+If you need to customize the Jackson `ObjectMapper` used by the client when sending entities to the server, you can access and modify its instance:
+
+
+
+{`ObjectMapper entityMapper = conventions.getEntityMapper();
+entityMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, true);
+`}
+
+
+
+## Numbers (de)serialization
+
+The RavenDB client supports all common numeric value types (`int`, `long`, `double`, etc.) out of the box.
+Note that although the (de)serialization of `decimal` values is fully supported, there are [server side limitations](../../server/kb/numbers-in-ravendb.mdx) to numbers in that range.
+Other number types, like `BigInteger`, must be handled using custom (de)serialization.
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/_serialization-csharp.mdx b/versioned_docs/version-7.1/client-api/configuration/_serialization-csharp.mdx
new file mode 100644
index 0000000000..0770186e13
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/_serialization-csharp.mdx
@@ -0,0 +1,121 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+Use the methods described in this page to customize the [conventions](../../client-api/configuration/conventions.mdx)
+by which entities are serialized as they are sent by the client to the server.
+
+* In this page:
+ * [CustomizeJsonSerializer](../../client-api/configuration/serialization.mdx#customizejsonserializer)
+ * [JsonContractResolver](../../client-api/configuration/serialization.mdx#jsoncontractresolver)
+ * [BulkInsert.TrySerializeEntityToJsonStream](../../client-api/configuration/serialization.mdx#bulkinserttryserializeentitytojsonstream)
+ * [IgnoreByRefMembers and IgnoreUnsafeMembers](../../client-api/configuration/serialization.mdx#ignorebyrefmembers-and-ignoreunsafemembers)
+
+
+## Serialization
+
+## CustomizeJsonSerializer
+
+* The `JsonSerializer` object is used by the client to serialize entities
+ sent by the client to the server.
+* Use the `CustomizeJsonSerializer` convention to modify `JsonSerializer`
+ by registering a serialization customization action.
+
+
+
+{`Serialization = new NewtonsoftJsonSerializationConventions
+\{
+ CustomizeJsonSerializer = serializer => throw new CodeOmitted()
+\}
+`}
+
+
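+For example (an illustrative customization), null values could be serialized explicitly:
+
+
+
+{`Serialization = new NewtonsoftJsonSerializationConventions
+\{
+    // Include properties with null values in the stored JSON document
+    CustomizeJsonSerializer = serializer =>
+        serializer.NullValueHandling = NullValueHandling.Include
+\}
+`}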
+
+## JsonContractResolver
+
+* The default `JsonContractResolver` convention used by RavenDB will serialize
+ **all** properties and **all** public fields.
+* Change this behavior by providing your own implementation of the `IContractResolver`
+ interface.
+
+
+
+{`Serialization = new NewtonsoftJsonSerializationConventions
+\{
+ JsonContractResolver = new CustomJsonContractResolver()
+\}
+`}
+
+
+
+
+
+{`public class CustomJsonContractResolver : IContractResolver
+\{
+ public JsonContract ResolveContract(Type type)
+ \{
+ throw new CodeOmitted();
+ \}
+\}
+`}
+
+
+
+* You can also customize the behavior of the **default resolver** by inheriting
+ from `DefaultRavenContractResolver` and overriding specific methods.
+
+
+
+{`public class CustomizedRavenJsonContractResolver : DefaultRavenContractResolver
+\{
+ public CustomizedRavenJsonContractResolver(ISerializationConventions conventions) : base(conventions)
+ \{
+ \}
+
+ protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
+ \{
+ throw new CodeOmitted();
+ \}
+\}
+`}
+
+
+
+## BulkInsert.TrySerializeEntityToJsonStream
+
+* Adjust [Bulk Insert](../../client-api/bulk-insert/how-to-work-with-bulk-insert-operation.mdx)
+ behavior by using the `TrySerializeEntityToJsonStream` convention to register a custom
+ serialization implementation.
+
+
+
+{`BulkInsert =
+\{
+ TrySerializeEntityToJsonStream = (entity, metadata, writer) => throw new CodeOmitted(),
+\}
+`}
+
+
+
+## IgnoreByRefMembers and IgnoreUnsafeMembers
+
+* By default, if you try to store an entity with `ref` or unsafe members,
+ the Client will throw an exception when [`session.SaveChanges()`](../../client-api/session/saving-changes.mdx)
+ is called.
+* Set the `IgnoreByRefMembers` convention to `true` to simply ignore `ref`
+ members when an attempt to store an entity with `ref` members is made.
+ The entity will be uploaded to the server with all non-`ref` members without
+ throwing an exception.
+ The document structure on the server-side will not contain fields for those
+ `ref` members.
+* Set the `IgnoreUnsafeMembers` convention to `true` to ignore all pointer
+ members in the same manner.
+* `IgnoreByRefMembers` default value: `false`
+* `IgnoreUnsafeMembers` default value: `false`
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/_serialization-java.mdx b/versioned_docs/version-7.1/client-api/configuration/_serialization-java.mdx
new file mode 100644
index 0000000000..0b62b54a37
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/_serialization-java.mdx
@@ -0,0 +1,24 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+## Customize ObjectMapper
+
+If you need to customize the Jackson `ObjectMapper` used by the client when sending entities to the server, you can access and modify its instance:
+
+
+
+{`ObjectMapper entityMapper = conventions.getEntityMapper();
+entityMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, true);
+`}
+
+
+
+## Numbers (de)serialization
+
+The RavenDB client supports all common numeric value types (`int`, `long`, `double`, etc.) out of the box.
+Note that although the (de)serialization of `decimal` values is fully supported, there are [server side limitations](../../server/kb/numbers-in-ravendb.mdx) to numbers in that range.
+Other number types, like `BigInteger`, must be handled using custom (de)serialization.
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/conventions.mdx b/versioned_docs/version-7.1/client-api/configuration/conventions.mdx
new file mode 100644
index 0000000000..4dddba9f76
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/conventions.mdx
@@ -0,0 +1,42 @@
+---
+title: "Conventions"
+hide_table_of_contents: true
+sidebar_label: Conventions
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ConventionsCsharp from './_conventions-csharp.mdx';
+import ConventionsNodejs from './_conventions-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/deserialization.mdx b/versioned_docs/version-7.1/client-api/configuration/deserialization.mdx
new file mode 100644
index 0000000000..bb0a0498ba
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/deserialization.mdx
@@ -0,0 +1,44 @@
+---
+title: "Conventions: Deserialization"
+hide_table_of_contents: true
+sidebar_label: DeSerialization
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeserializationCsharp from './_deserialization-csharp.mdx';
+import DeserializationJava from './_deserialization-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_category_.json b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_category_.json
new file mode 100644
index 0000000000..ac4d1b2d83
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 4,
+    "label": "Identifier generation"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-csharp.mdx b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-csharp.mdx
new file mode 100644
index 0000000000..1bdd8ddec4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-csharp.mdx
@@ -0,0 +1,153 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Global Identifier Generation Conventions
+
+
+
+Documents that have the same `@collection` metadata belong to the same [collection](../../../client-api/faq/what-is-a-collection.mdx) on the server side. Collection names are also used to build document identifiers. There are two functions that the client uses to determine a collection name for a given type. The first one is used for standard objects with a well-defined type:
+
+
+
+{`FindCollectionName = type => // function that provides the collection name based on the entity type
+`}
+
+
+
+The second one is dedicated for dynamic objects:
+
+
+
+{`FindCollectionNameForDynamic =
+ dynamicObject => // function to determine the collection name for the given dynamic object
+`}
+
+
+
+
+
+The `FindCollectionNameForDynamic` convention only works on objects that implement the [IDynamicMetaObjectProvider](https://docs.microsoft.com/en-us/dotnet/api/system.dynamic.idynamicmetaobjectprovider) interface. In .NET there are two built-in types that implement this interface: [ExpandoObject](https://docs.microsoft.com/en-us/dotnet/api/system.dynamic.expandoobject) and [DynamicObject](https://docs.microsoft.com/en-us/dotnet/api/system.dynamic.dynamicobject).
+
+For example, if we want to determine a collection using a `Collection` property from a dynamic object, we need to set `FindCollectionNameForDynamic` as follows:
+
+
+
+{`FindCollectionNameForDynamic = o => o.Collection
+`}
+
+
+
+After that we can store our dynamic object as follows:
+
+
+
+{`dynamic car = new ExpandoObject();
+car.Name = "Ford";
+car.Collection = "Cars";
+
+session.Store(car);
+
+dynamic animal = new ExpandoObject();
+animal.Name = "Rhino";
+animal.Collection = "Animals";
+
+session.Store(animal);
+`}
+
+
+
+
+
+## TransformTypeCollectionNameToDocumentIdPrefix
+
+Collection names determined by the previously described convention functions aren't directly used as prefixes in document identifiers. There is a convention function called `TransformTypeCollectionNameToDocumentIdPrefix` which takes the collection name and produces the prefix:
+
+
+
+{`TransformTypeCollectionNameToDocumentIdPrefix =
+ collectionName => // transform the collection name to the prefix of a identifier, e.g. [prefix]/12
+`}
+
+
+
+Its default behavior for a collection name containing at most one uppercase character is to simply convert it to a lowercase string: `Users` is transformed into `users`. For collection names containing more uppercase characters, there is no change: the collection name `LineItems` yields the prefix `LineItems`.
+
+## FindClrTypeName and FindClrType
+
+In the metadata of all documents stored in a database, you can find the following property which specifies the client-side type. For instance:
+
+
+
+{`\{
+ "Raven-Clr-Type": "Orders.Shipper, Northwind"
+\}
+`}
+
+
+
+This property is used by the RavenDB client to perform a conversion between a .NET object and a JSON document stored in a database. A function responsible for retrieving the CLR type of an entity is defined by the `FindClrTypeName` convention:
+
+
+
+{`FindClrTypeName = type => // use reflection to determine the type;
+`}
+
+
+
+To properly perform the reverse conversion, that is, from a JSON result into a .NET object, we need to retrieve the CLR type from the `Raven-Clr-Type` metadata:
+
+
+
+{`FindClrType = (id, doc) =>
+\{
+ if (doc.TryGet(Constants.Documents.Metadata.Key, out BlittableJsonReaderObject metadata) &&
+ metadata.TryGet(Constants.Documents.Metadata.RavenClrType, out string clrType))
+ return clrType;
+
+ return null;
+\},
+`}
+
+
+
+## FindIdentityProperty
+
+The client must know where the identifier is stored in your entity in order to properly transform it into a JSON document. It uses the `FindIdentityProperty` convention for that. The default, and very common, convention is that a property named `Id` is the identifier:
+
+
+
+{`FindIdentityProperty = memberInfo => memberInfo.Name == "Id"
+`}
+
+
+
+You can provide a customization based on the `MemberInfo` parameter to indicate which property or field keeps the identifier. The client will iterate over all object properties and take the first one that matches the defined predicate.
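+
+For example, a minimal sketch (assuming your entities keep their identifier in a property named `Identifier` - a hypothetical name):
+
+
+
+{`FindIdentityProperty = memberInfo => memberInfo.Name == "Identifier"
+`}
+
+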
+
+## FindIdentityPropertyNameFromCollectionName
+
+Sometimes the results returned by the server don't have identifiers defined (for example, if you run a projection query), yet they still have `@collection` in their metadata.
+
+To perform the conversion into a .NET object, a function that finds the identity property name for a given entity name is applied:
+
+
+
+{`FindIdentityPropertyNameFromCollectionName = collectionName => "Id"
+`}
+
+
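+For example, a minimal sketch of a customization (the `LegacyOrders` collection and its `OrderId` property are hypothetical):
+
+
+
+{`FindIdentityPropertyNameFromCollectionName =
+    collectionName => collectionName == "LegacyOrders" ? "OrderId" : "Id"
+`}
+
+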
+
+## IdentityPartsSeparator
+
+By default, document identifiers have the following format: `[collectionName]/[identityValue]-[nodeTag]`. The slash character (`/`) separates the two parts of an identifier.
+You can override it by using the `IdentityPartsSeparator` convention. Its default definition is:
+
+
+
+{`IdentityPartsSeparator = "/"
+`}
+
+
+
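+For example, to produce identifiers like `users:1-A` instead of `users/1-A`, you could set the separator as follows (a sketch; note that the pipe character (`|`) cannot be used as a separator):
+
+
+
+{`IdentityPartsSeparator = ":"
+`}
+
+
+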
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-java.mdx b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-java.mdx
new file mode 100644
index 0000000000..862d1299b6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-java.mdx
@@ -0,0 +1,112 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Global Identifier Generation Conventions
+
+
+
+Documents that have the same `@collection` metadata belong to the same [collection](../../../client-api/faq/what-is-a-collection.mdx) on the server side. Collection names are also used to build document identifiers.
+
+
+
+{`conventions.setFindCollectionName(
+ clazz -> // function that provides the collection name based on the entity class
+`}
+
+
+
+## TransformClassCollectionNameToDocumentIdPrefix
+
+Collection names determined by the convention functions described above aren't used directly as prefixes in document identifiers. A convention function called `TransformClassCollectionNameToDocumentIdPrefix` takes the collection name and produces the prefix:
+
+
+
+{`conventions.setTransformClassCollectionNameToDocumentIdPrefix(
+    collectionName -> // transform the collection name to the prefix of an identifier, e.g. [prefix]/12
+`}
+
+
+
+By default, a collection name that contains only one uppercase character is simply converted to a lowercase string: `Users` is transformed into `users`. Collection names containing more uppercase characters are left unchanged: the collection name `LineItems` yields the prefix `LineItems`.
+
+## FindJavaClassName and FindJavaClass
+
+In the metadata of all documents stored by the RavenDB Java Client, you can find the following property, which specifies the client-side type. For instance:
+
+
+
+{`\{
+ "Raven-Java-Type": "com.example.Customer"
+\}
+`}
+
+
+
+This property is used by the RavenDB client to perform a conversion between a Java object and a JSON document stored in a database. The function responsible for retrieving the Java class of an entity is defined by the `findJavaClassName` convention:
+
+
+
+{`conventions.setFindJavaClassName(
+    clazz -> // use reflection to determine the type
+`}
+
+
+
+To properly perform the reverse conversion, that is, from a JSON result into a Java object, the client needs to retrieve the Java class from the `Raven-Java-Type` metadata:
+
+
+
+{`conventions.setFindJavaClass((id, doc) -> \{
+ return Optional.ofNullable((ObjectNode) doc.get(Constants.Documents.Metadata.KEY))
+ .map(x -> x.get(Constants.Documents.Metadata.RAVEN_JAVA_TYPE))
+ .map(x -> x.asText())
+ .orElse(null);
+\});
+`}
+
+
+
+
+## FindIdentityProperty
+
+The client must know where the identifier is stored in your entity in order to properly transform it into a JSON document. It uses the `FindIdentityProperty` convention for that. The default, and very common, convention is that a property named `Id` is the identifier:
+
+
+
+{`conventions.setFindIdentityProperty(fieldInfo -> "Id".equals(fieldInfo.getName()));
+`}
+
+
+
+You can provide a customization based on the `FieldInfo` parameter to indicate which property or field keeps the identifier. The client will iterate over all object properties and take the first one that matches the defined predicate.
+
+## FindIdentityPropertyNameFromCollectionName
+
+Sometimes the results returned by the server don't have identifiers defined (for example, if you run a projection query), yet they still have `@collection` in their metadata.
+
+To perform the conversion into a Java object, a function that finds the identity property name for a given entity name is applied:
+
+
+
+{`conventions.setFindIdentityPropertyNameFromCollectionName(
+ collectionName -> "Id"
+);
+`}
+
+
+
+## IdentityPartsSeparator
+
+By default, document identifiers have the following format: `[collectionName]/[identityValue]-[nodeTag]`. The slash character (`/`) separates the two parts of an identifier.
+You can override it by using the `IdentityPartsSeparator` convention. Its default definition is:
+
+
+
+{`conventions.setIdentityPartsSeparator("/");
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-nodejs.mdx b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-nodejs.mdx
new file mode 100644
index 0000000000..06bae1337e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_global-nodejs.mdx
@@ -0,0 +1,115 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Global Identifier Generation Conventions
+
+
+
+Documents that have the same `@collection` metadata belong to the same [collection](../../../client-api/faq/what-is-a-collection.mdx) on the server side. Collection names are also used to build document identifiers.
+
+
+
+{`conventions.findCollectionName =
+ type => // function that provides the collection name based on the entity type
+`}
+
+
+
+## TransformClassCollectionNameToDocumentIdPrefix
+
+Collection names determined by the convention functions described above aren't used directly as prefixes in document identifiers. A convention function called `transformClassCollectionNameToDocumentIdPrefix()` takes the collection name and produces the prefix:
+
+
+
+{`conventions.transformClassCollectionNameToDocumentIdPrefix =
+ collectionName => // transform the collection name to the prefix of an identifier, e.g. [prefix]/12
+`}
+
+
+
+By default, a collection name that contains only one uppercase character is simply converted to a lowercase string: `Users` is transformed into `users`. Collection names containing more uppercase characters are left unchanged: the collection name `LineItems` yields the prefix `LineItems`.
+
+## FindJsTypeName and FindJsType
+
+In the metadata of all documents stored by the RavenDB Node.js Client, you can find the following property, which specifies the client-side type. For instance:
+
+
+
+{`\{
+ "Raven-Node-Type": "Customer"
+\}
+`}
+
+
+
+This property is used by the RavenDB client to perform a conversion between a JS object and a JSON document stored in a database. The function responsible for retrieving the JS type of an entity is defined by the `findJsTypeName()` convention:
+
+
+
+{`conventions.findJsTypeName =
+ type => // determine the type name based on type
+`}
+
+
+
+To properly perform the reverse conversion, that is, from a JSON result into a JS object, the client needs to retrieve the JS type from the `Raven-Node-Type` metadata:
+
+
+
+{`conventions.findJsType = (id, doc) => \{
+ const metadata = doc["@metadata"];
+ if (metadata) \{
+ const jsType = metadata["Raven-Node-Type"];
+ return this.getJsTypeByDocumentType(jsType);
+ \}
+
+ return null;
+\};
+`}
+
+
+
+
+## FindIdentityPropertyNameFromCollectionName
+
+Sometimes the results returned by the server don't have identifiers defined (for example, if you run a projection query), yet they still have `@collection` in their metadata.
+
+To perform the conversion into a JS object, a function that finds the identity property name for a given collection name is applied:
+
+
+
+{`conventions.findIdentityPropertyNameFromCollectionName =
+ collectionName => "id";
+`}
+
+
+
+## IdentityPartsSeparator
+
+By default, document identifiers have the following format: `[collectionName]/[identityValue]-[nodeTag]`. The slash character (`/`) separates the two parts of an identifier.
+You can override it by using the `IdentityPartsSeparator` convention. Its default definition is:
+
+
+
+{`conventions.identityPartsSeparator = "/";
+`}
+
+
+
+## FindCollectionNameForObjectLiteral
+
+This convention is *not defined by default*. It's only useful when using object literals as entities. It defines how the client obtains a collection name for an object literal. If it's undefined, object literals stored with `session.store()` will end up in the `@empty` collection, with a UUID for an ID.
+
+For instance, here's a mapping of the *category* field to the collection name:
+
+
+{`conventions.findCollectionNameForObjectLiteral =
+ entity => entity["category"];
+ // function that provides the collection name based on the entity object
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-csharp.mdx b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-csharp.mdx
new file mode 100644
index 0000000000..4307b91211
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-csharp.mdx
@@ -0,0 +1,134 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Type-Specific Identifier Generation
+
+[In the previous article](../../../client-api/configuration/identifier-generation/global.mdx), Global Identifier generation conventions were introduced. Any customization made by using those conventions changes the behavior for all stored entities.
+Now we will show how to override the default ID generation in a more granular way, for particular types of entities.
+
+To override the default document identifier generation algorithms, you can register custom conventions per entity type and include your own identifier generation logic.
+
+
+
+
+
+{`DocumentConventions RegisterAsyncIdConvention<TEntity>(Func<string, TEntity, Task<string>> func);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **func** | Func<string, TEntity, Task<string>> | Identifier generation function that asynchronously supplies a result for the given database name (`string`) and entity object (`TEntity`). |
+
+| Return Value | |
+| ------------- | ----- |
+| DocumentConventions | Current `DocumentConventions` instance. |
+
+
+This method applies to both synchronous and asynchronous operations.
+
+
+
+The database name parameter is passed to the convention registration methods to allow users to make ID generation decisions per database.
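+
+For instance, a minimal sketch of a per-database decision (the `Invoice` class, its `Number` property, and the `Archive` database name are all hypothetical):
+
+
+
+{`store.Conventions.RegisterAsyncIdConvention<Invoice>(
+    (dbName, invoice) => Task.FromResult(
+        dbName == "Archive"
+            ? $"archived-invoices/\{invoice.Number\}"
+            : $"invoices/\{invoice.Number\}"));
+`}
+
+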
+
+
+### Example
+
+Let's say that you want to use semantic identifiers for `Employee` objects. Instead of `employees/[identity]` you want identifiers like `employees/[lastName]/[firstName]`
+(for the sake of simplicity, let us not consider the uniqueness of such identifiers). You need to create a convention that combines the `employees` prefix with the `LastName` and `FirstName` properties of an employee.
+
+
+
+{`store.Conventions.RegisterAsyncIdConvention(
+ (dbname, employee) =>
+ Task.FromResult(string.Format("employees/\{0\}/\{1\}", employee.LastName, employee.FirstName)));
+`}
+
+
+
+Now, when you store a new entity:
+
+
+
+{`using (var session = store.OpenSession())
+\{
+ session.Store(new Employee
+ \{
+ FirstName = "James",
+ LastName = "Bond"
+ \});
+
+ session.SaveChanges();
+\}
+`}
+
+
+
+the client will associate the `employees/Bond/James` identifier with it.
+
+## Inheritance
+
+Registered conventions are inheritance-aware: a convention registered for a type also applies to all types derived from it, following the inheritance-hierarchy tree.
+
+### Example
+
+If we create a new class `EmployeeManager` that derives from our `Employee` class and keep the convention registered in the last example, both types will use it:
+
+
+
+{`using (var session = store.OpenSession())
+\{
+ session.Store(new Employee // employees/Smith/Adam
+ \{
+ FirstName = "Adam",
+ LastName = "Smith"
+ \});
+
+ session.Store(new EmployeeManager // employees/Jones/David
+ \{
+ FirstName = "David",
+ LastName = "Jones"
+ \});
+
+ session.SaveChanges();
+\}
+`}
+
+
+
+If we register two conventions, one for `Employee` and the second for `EmployeeManager`, then each will be picked for its specific type.
+
+
+
+{`store.Conventions.RegisterAsyncIdConvention(
+ (dbname, employee) =>
+ Task.FromResult(string.Format("employees/\{0\}/\{1\}", employee.LastName, employee.FirstName)));
+
+store.Conventions.RegisterAsyncIdConvention(
+ (dbname, employee) =>
+ Task.FromResult(string.Format("managers/\{0\}/\{1\}", employee.LastName, employee.FirstName)));
+
+using (var session = store.OpenSession())
+\{
+ session.Store(new Employee // employees/Smith/Adam
+ \{
+ FirstName = "Adam",
+ LastName = "Smith"
+ \});
+
+ session.Store(new EmployeeManager // managers/Jones/David
+ \{
+ FirstName = "David",
+ LastName = "Jones"
+ \});
+
+ session.SaveChanges();
+\}
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-java.mdx b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-java.mdx
new file mode 100644
index 0000000000..3c9a0bfaf2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-java.mdx
@@ -0,0 +1,124 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Type-Specific Identifier Generation
+
+[In the previous article](../../../client-api/configuration/identifier-generation/global.mdx), Global Identifier generation conventions were introduced. Any customization made by using those conventions changes the behavior for all stored entities.
+Now we will show how to override the default ID generation in a more granular way, for particular types of entities.
+
+To override the default document identifier generation algorithms, you can register custom conventions per entity type and include your own identifier generation logic.
+
+
+
+
+
+{`public <TEntity> DocumentConventions registerIdConvention(Class<TEntity> clazz, BiFunction<String, TEntity, String> function);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **function** | BiFunction<String, TEntity, String> | Identifier generation function that supplies a result for the given database name (`String`) and entity object (`TEntity`). |
+
+| Return Value | |
+| ------------- | ----- |
+| DocumentConventions | Current `DocumentConventions` instance. |
+
+
+The database name parameter is passed to the convention registration methods to allow users to make ID generation decisions per database.
+
+
+### Example
+
+Let's say that you want to use semantic identifiers for `Employee` objects. Instead of `employees/[identity]` you want identifiers like `employees/[lastName]/[firstName]`
+(for the sake of simplicity, let us not consider the uniqueness of such identifiers). You need to create a convention that combines the `employees` prefix with the `lastName` and `firstName` properties of an employee.
+
+
+
+{`store.getConventions().registerIdConvention(Employee.class,
+ (dbName, employee) ->
+ String.format("employees/%s/%s", employee.getLastName(), employee.getFirstName()));
+`}
+
+
+
+Now, when you store a new entity:
+
+
+
+{`try (IDocumentSession session = store.openSession()) \{
+ Employee employee = new Employee();
+ employee.setFirstName("James");
+ employee.setLastName("Bond");
+
+ session.store(employee);
+ session.saveChanges();
+\}
+`}
+
+
+
+the client will associate the `employees/Bond/James` identifier with it.
+
+## Inheritance
+
+Registered conventions are inheritance-aware: a convention registered for a type also applies to all types derived from it, following the inheritance-hierarchy tree.
+
+### Example
+
+If we create a new class `EmployeeManager` that derives from our `Employee` class and keep the convention registered in the last example, both types will use it:
+
+
+
+{`try (IDocumentSession session = store.openSession()) \{
+ Employee adam = new Employee();
+ adam.setFirstName("Adam");
+ adam.setLastName("Smith");
+ session.store(adam); // employees/Smith/Adam
+
+ EmployeeManager david = new EmployeeManager();
+ david.setFirstName("David");
+ david.setLastName("Jones");
+ session.store(david); // employees/Jones/David
+
+ session.saveChanges();
+\}
+`}
+
+
+
+If we register two conventions, one for `Employee` and the second for `EmployeeManager`, then each will be picked for its specific type.
+
+
+
+{`store.getConventions().registerIdConvention(Employee.class,
+ (dbName, employee) ->
+ String.format("employees/%s/%s", employee.getLastName(), employee.getFirstName())
+);
+
+store.getConventions().registerIdConvention(EmployeeManager.class,
+ (dbName, employee) ->
+ String.format("managers/%s/%s", employee.getLastName(), employee.getFirstName())
+);
+
+try (IDocumentSession session = store.openSession()) \{
+ Employee adam = new Employee();
+ adam.setFirstName("Adam");
+ adam.setLastName("Smith");
+    session.store(adam); // employees/Smith/Adam
+
+ EmployeeManager david = new EmployeeManager();
+ david.setFirstName("David");
+ david.setLastName("Jones");
+ session.store(david); // managers/Jones/David
+
+ session.saveChanges();
+\}
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-nodejs.mdx b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-nodejs.mdx
new file mode 100644
index 0000000000..c4d7801587
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/_type-specific-nodejs.mdx
@@ -0,0 +1,93 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Type-Specific Identifier Generation
+
+[In the previous article](../../../client-api/configuration/identifier-generation/global.mdx), Global Identifier generation conventions were introduced. Any customization made by using those conventions changes the behavior for all stored entities.
+Now we will show how to override the default ID generation in a more granular way, for particular types of entities.
+
+To override the default document identifier generation algorithms, you can register custom conventions per entity type and include your own identifier generation logic.
+
+
+
+
+
+{`conventions.registerIdConvention(clazz, idConvention);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| clazz | class or object | Entity type |
+| idConvention | function `(databaseName, entity) => Promise` | Identifier generation function that supplies a result for the given database name and entity object. Must return a `Promise` resolving to a string. |
+
+| Return Value | |
+| ------------- | ----- |
+| DocumentConventions | Current `DocumentConventions` instance. |
+
+
+The database name parameter is passed to the convention registration methods to allow users to make ID generation decisions per database.
+
+
+### Example
+
+Let's say that you want to use semantic identifiers for `Employee` objects. Instead of `employees/[identity]` you want identifiers like `employees/[lastName]/[firstName]`
+(for the sake of simplicity, let us not consider the uniqueness of such identifiers). You need to create a convention that combines the `employees` prefix with the `lastName` and `firstName` properties of an employee.
+
+
+
+{`store.conventions.registerIdConvention(Employee,
+ (dbName, entity) => Promise.resolve(\`employees/$\{entity.lastName\}/$\{entity.firstName\}\`));
+
+// or using async keyword
+store.conventions.registerIdConvention(Employee,
+ async (dbName, entity) => \`employees/$\{entity.lastName\}/$\{entity.firstName\}\`);
+`}
+
+
+
+Now, when you store a new entity:
+
+
+
+{`const session = store.openSession();
+const employee = new Employee("James", "Bond");
+
+await session.store(employee);
+await session.saveChanges();
+`}
+
+
+
+the client will associate the `employees/Bond/James` identifier with it.
+
+
+The ID convention function must return a `Promise`, since it *can* be asynchronous.
+
+
+### Example: Object literal based entities
+
+
+
+{`// for object literal based entities you can pass type descriptor object
+const typeDescriptor = \{
+ name: "Employee",
+ isType(entity) \{
+        // if it quacks like a duck... ahem, employee
+ return entity
+ && "firstName" in entity
+ && "lastName" in entity
+ && "boss" in entity;
+ \}
+\};
+
+store.conventions.registerIdConvention(typeDescriptor,
+ async (dbName, entity) => \`employees/$\{entity.lastName\}/$\{entity.firstName\}\`);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/global.mdx b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/global.mdx
new file mode 100644
index 0000000000..56a95491c6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/global.mdx
@@ -0,0 +1,38 @@
+---
+title: "Global Identifier Generation Conventions"
+hide_table_of_contents: true
+sidebar_label: Global
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GlobalCsharp from './_global-csharp.mdx';
+import GlobalJava from './_global-java.mdx';
+import GlobalNodejs from './_global-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/identifier-generation/type-specific.mdx b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/type-specific.mdx
new file mode 100644
index 0000000000..4804187b63
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/identifier-generation/type-specific.mdx
@@ -0,0 +1,37 @@
+---
+title: "Type-Specific Identifier Generation"
+hide_table_of_contents: true
+sidebar_label: Type-specific
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import TypeSpecificCsharp from './_type-specific-csharp.mdx';
+import TypeSpecificJava from './_type-specific-java.mdx';
+import TypeSpecificNodejs from './_type-specific-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_category_.json b/versioned_docs/version-7.1/client-api/configuration/load-balance/_category_.json
new file mode 100644
index 0000000000..b8dcac545a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 3,
+ "label": Load balancing client requests,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-csharp.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-csharp.mdx
new file mode 100644
index 0000000000..e5d2d7cb04
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-csharp.mdx
@@ -0,0 +1,266 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The `loadBalanceBehavior` configuration allows you to specify which sessions should
+ communicate with the same node.
+
+* Sessions that are assigned the **same context** will have all their _Read_ & _Write_
+ requests routed to the **same node**. Gain load balancing by assigning **different contexts**
+ to **different sessions**.
+* In this page:
+ * [LoadBalanceBehavior options](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#loadbalancebehavior-options)
+ * [Initialize LoadBalanceBehavior on the client](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#initialize-loadbalancebehavior-on-the-client)
+ * [Set LoadBalanceBehavior on the server:](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server)
+ * [By operation](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server---by-operation)
+ * [From Studio](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server---from-studio)
+ * [When to use](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#when-to-use)
+
+
+## LoadBalanceBehavior options
+
+### `None` (default option)
+
+* Requests will be handled based on the `ReadBalanceBehavior` configuration.
+ See the conditional flow described in [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ * **_Read_** requests:
+ The client will calculate the target node from the configured [ReadBalanceBehavior Option](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options).
+ * **_Write_** requests:
+ Will be sent to the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+ The data will then be replicated to all the other nodes in the database group.
+### `UseSessionContext`
+
+* **Load-balance**
+
+ * When this option is enabled, the client will calculate the target node from the session-id.
+ The session-id is hashed from a **context string** and an optional **seed** given by the user.
+ The context string together with the seed are referred to as **"The session context"**.
+
+ * Per session, the client will select a node from the topology list based on this session-context.
+ So sessions that use the **same** context will target the **same** node.
+
+  * All **_Read & Write_** requests made on the session (e.g. a query or a load request)
+ will address this calculated node.
+    _Read & Write_ requests that are made on the store (e.g. executing an [operation](../../../client-api/operations/what-are-operations.mdx))
+ will go to the preferred node.
+
+ * All _Write_ requests will be replicated to all the other nodes in the database group as usual.
+
+* **Failover**
+
+ * In case of a failure, the client will try to access the next node from the topology nodes list.
+
+
+
+## Initialize LoadBalanceBehavior on the client
+
+* The `LoadBalanceBehavior` convention can be set **on the client** when initializing the Document Store.
+ This will set the load balance behavior for the default database that is set on the store.
+
+* This setting can be **overridden** by setting 'LoadBalanceBehavior' on the server, see [below](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server).
+
+**Initialize conventions**:
+
+
+
+{`// Initialize 'LoadBalanceBehavior' on the client:
+var documentStore = new DocumentStore
+\{
+ Urls = new[] \{"ServerURL_1", "ServerURL_2", "..."\},
+ Database = "DefaultDB",
+ Conventions = new DocumentConventions
+ \{
+ // Enable the session-context feature
+ // If this is not enabled then a context string set in a session will be ignored
+ LoadBalanceBehavior = LoadBalanceBehavior.UseSessionContext,
+
+ // Assign a method that sets the default context string
+ // This string will be used for sessions that do Not provide a context string
+ // A sample GetDefaultContext method is defined below
+ LoadBalancerPerSessionContextSelector = GetDefaultContext,
+
+ // Set a seed
+ // The seed is 0 by default, provide any number to override
+ LoadBalancerContextSeed = 5
+ \}
+\}.Initialize();
+`}
+
+
+
+
+{`// A customized method for getting a default context string
+private string GetDefaultContext(string dbName)
+\{
+ // Method is invoked by RavenDB with the database name
+ // Use that name - or return any string of your choice
+ return "DefaultContextString";
+\}
+`}
+
+
+**Session usage**:
+
+
+
+{`// Open a session that will use the DEFAULT store values:
+using (var session = documentStore.OpenSession())
+\{
+ // For all Read & Write requests made in this session,
+ // node to access is calculated from string & seed values defined on the store
+    var employee = session.Load<Employee>("employees/1-A");
+\}
+`}
+
+
+
+
+{`// Open a session that will use a UNIQUE context string:
+using (var session = documentStore.OpenSession())
+\{
+ // Call SetContext, pass a unique context string for this session
+ session.Advanced.SessionInfo.SetContext("SomeOtherContext");
+
+ // For all Read & Write requests made in this session,
+ // node to access is calculated from the unique string & the seed defined on the store
+    var employee = session.Load<Employee>("employees/1-A");
+\}
+`}
+
+
+
+
+
+## Set LoadBalanceBehavior on the server
+
+
+
+**Note**:
+
+* Setting the load balance behavior on the server, either by an **Operation** or from the **Studio**,
+ only 'enables the feature' and sets the seed.
+
+* For the feature to be in effect, you still need to define the context string itself:
+ * either per session, call `session.Advanced.SessionInfo.SetContext`
+ * or, on the document store, set a default value for - `LoadBalancerPerSessionContextSelector`
+
+
+#### Set LoadBalanceBehavior on the server - by operation:
+
+* The `LoadBalanceBehavior` configuration can be set **on the server** by sending an [operation](../../../client-api/operations/what-are-operations.mdx).
+
+* The operation can modify the default database only, or all databases - see examples below.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+
+{`// Setting 'LoadBalanceBehavior' on the server by sending an operation:
+using (documentStore)
+{
+ // Define the client configuration to put on the server
+ var configurationToSave = new ClientConfiguration
+ {
+ // Enable the session-context feature
+ // If this is not enabled then a context string set in a session will be ignored
+ LoadBalanceBehavior = LoadBalanceBehavior.UseSessionContext,
+
+ // Set a seed
+ // The seed is 0 by default, provide any number to override
+ LoadBalancerContextSeed = 10,
+
+ // NOTE:
+ // The session's context string is Not set on the server
+ // You still need to set it on the client:
+ // * either as a convention on the document store
+ // * or pass it to 'SetContext' method on the session
+
+ // Configuration will be in effect when Disabled is set to false
+ Disabled = false
+ };
+
+ // Define the put configuration operation for the DEFAULT database
+ var putConfigurationOp = new PutClientConfigurationOperation(configurationToSave);
+
+ // Execute the operation by passing it to Maintenance.Send
+ documentStore.Maintenance.Send(putConfigurationOp);
+
+ // After the operation has executed:
+ // all Read & Write requests, per session, will address the node calculated from:
+ // * the seed set on the server &
+ // * the session's context string set on the client
+}
+`}
+
+
+
+
+{`// Setting 'LoadBalanceBehavior' on the server by sending an operation:
+using (documentStore)
+{
+ // Define the client configuration to put on the server
+ var configurationToSave = new ClientConfiguration
+ {
+ // Enable the session-context feature
+ // If this is not enabled then a context string set in a session will be ignored
+ LoadBalanceBehavior = LoadBalanceBehavior.UseSessionContext,
+
+ // Set a seed
+ // The seed is 0 by default, provide any number to override
+ LoadBalancerContextSeed = 10,
+
+ // NOTE:
+ // The session's context string is Not set on the server
+ // You still need to set it on the client:
+ // * either as a convention on the document store
+ // * or pass it to 'SetContext' method on the session
+
+ // Configuration will be in effect when Disabled is set to false
+ Disabled = false
+ };
+
+ // Define the put configuration operation for ALL databases
+ var putConfigurationOp = new PutServerWideClientConfigurationOperation(configurationToSave);
+
+ // Execute the operation by passing it to Maintenance.Server.Send
+ documentStore.Maintenance.Server.Send(putConfigurationOp);
+
+ // After the operation has executed:
+ // all Read & Write requests, per session, will address the node calculated from:
+ // * the seed set on the server &
+ // * the session's context string set on the client
+}
+`}
+
+
+
+#### Set LoadBalanceBehavior on the server - from Studio:
+
+* The `LoadBalanceBehavior` configuration can be set from the Studio's [Client Configuration view](../../../studio/database/settings/client-configuration-per-database.mdx).
+ Setting it from the Studio will set this configuration directly **on the server**.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+## When to use
+
+* Distributing _Read & Write_ requests among the cluster nodes can be beneficial
+ when a set of sessions handle a specific set of documents or similar data.
+ Load balancing can be achieved by routing requests from the sessions that handle similar topics to the same node, while routing other sessions to other nodes.
+
+* Another usage example can be setting the session's context to the current user,
+  thus spreading the _Read & Write_ requests per user that logs into the application (as sketched below).
+
+* Once load balancing is set to be per session-context,
+  if you detect that many or all sessions send requests to the same node,
+  you can add a further level of node randomization by changing the seed.
+
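+A minimal sketch of the per-user approach (assuming `currentUserName` holds the logged-in user's name; the variable is illustrative):
+
+
+
+{`using (var session = documentStore.OpenSession())
+\{
+    // Illustrative: route all of this user's requests to the same node
+    session.Advanced.SessionInfo.SetContext(currentUserName);
+
+    // Read & Write requests in this session now target the node
+    // calculated from this context string and the configured seed
+    var employee = session.Load<Employee>("employees/1-A");
+\}
+`}
+
+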
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-nodejs.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-nodejs.mdx
new file mode 100644
index 0000000000..99039ddfb6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-nodejs.mdx
@@ -0,0 +1,257 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The `loadBalanceBehavior` configuration allows you to specify which sessions should
+ communicate with the same node.
+
+* Sessions that are assigned the **same context** will have all their _Read_ & _Write_
+ requests routed to the **same node**. Gain load balancing by assigning **different contexts**
+ to **different sessions**.
+* In this page:
+ * [LoadBalanceBehavior options](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#loadbalancebehavior-options)
+ * [Initialize LoadBalanceBehavior on the client](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#initialize-loadbalancebehavior-on-the-client)
+ * [Set LoadBalanceBehavior on the server:](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server)
+ * [By operation](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server---by-operation)
+ * [From Studio](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server---from-studio)
+ * [When to use](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#when-to-use)
+
+
+## LoadBalanceBehavior options
+
+### `None` (default option)
+
+* Requests will be handled based on the `readBalanceBehavior` configuration.
+ See the conditional flow described in [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ * **_Read_** requests:
+ The client will calculate the target node from the configured [readBalanceBehavior Option](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options).
+ * **_Write_** requests:
+ Will be sent to the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+ The data will then be replicated to all the other nodes in the database group.
+### `UseSessionContext`
+
+* **Load-balance**
+
+ * When this option is enabled, the client will calculate the target node from the session-id.
+ The session-id is hashed from a **context string** and an optional **seed** given by the user.
+ The context string together with the seed are referred to as **"The session context"**.
+
+ * Per session, the client will select a node from the topology list based on this session-context.
+ So sessions that use the **same** context will target the **same** node.
+
+  * All **_Read & Write_** requests made on the session (e.g. a query or a load request)
+ will address this calculated node.
+    _Read & Write_ requests that are made on the store (e.g. executing an [operation](../../../client-api/operations/what-are-operations.mdx))
+ will go to the preferred node.
+
+ * All _Write_ requests will be replicated to all the other nodes in the database group as usual.
+
+* **Failover**
+
+ * In case of a failure, the client will try to access the next node from the topology nodes list.
+
+
+
+## Initialize loadBalanceBehavior on the client
+
+* The `loadBalanceBehavior` convention can be set **on the client** when initializing the Document Store.
+ This will set the load balance behavior for the default database that is set on the store.
+
+* This setting can be **overridden** by setting 'loadBalanceBehavior' on the server, see [below](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server).
+
+**Initialize conventions**:
+
+
+
+{`// Initialize 'loadBalanceBehavior' on the client:
+// ===============================================
+
+const documentStore = new DocumentStore(["serverUrl_1", "serverUrl_2", "..."], "DefaultDB");
+
+// Enable the session-context feature
+// If this is not enabled then a context string set in a session will be ignored
+documentStore.conventions.loadBalanceBehavior = "UseSessionContext";
+
+// Assign a method that sets the default context string
+// This string will be used for sessions that do Not provide a context string
+// A sample getDefaultContext method is defined below
+documentStore.conventions.loadBalancerPerSessionContextSelector = getDefaultContext;
+
+// Set a seed
+// The seed is 0 by default, provide any number to override
+documentStore.conventions.loadBalancerContextSeed = 5;
+
+documentStore.initialize();
+`}
+
+
+
+
+{`// A customized method for getting a default context string
+const getDefaultContext = (dbName) => \{
+ // Method is invoked by RavenDB with the database name
+ // Use that name - or return any string of your choice
+ return "defaultContextString";
+\}
+`}
+
+
+**Session usage**:
+
+
+
+{`// Open a session that will use the DEFAULT store values:
+const session = documentStore.openSession();
+
+// For all Read & Write requests made in this session,
+// node to access is calculated from string & seed values defined on the store
+const employee = await session.load("employees/1-A");
+`}
+
+
+
+
+{`// Open a session that will use a UNIQUE context string:
+const session = documentStore.openSession();
+
+// Call setContext, pass a unique context string for this session
+session.advanced.sessionInfo.setContext("SomeOtherContext");
+
+// For all Read & Write requests made in this session,
+// node to access is calculated from the unique string & the seed defined on the store
+const employee = await session.load("employees/1-A");
+`}
+
+
+
+
+
+## Set loadBalanceBehavior on the server
+
+
+
+**Note**:
+
+* Setting the load balance behavior on the server, either by an **Operation** or from the **Studio**,
+ only 'enables the feature' and sets the seed.
+
+* For the feature to be in effect, you still need to define the context string itself:
+ * either per session, call `session.advanced.sessionInfo.setContext`
+ * or, on the document store, set a default value for - `loadBalancerPerSessionContextSelector`
+
+
+#### Set LoadBalanceBehavior on the server - by operation:
+
+* The `loadBalanceBehavior` configuration can be set **on the server** by sending an [operation](../../../client-api/operations/what-are-operations.mdx).
+
+* The operation can modify the default database only, or all databases - see examples below.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+
+{`// Setting 'loadBalanceBehavior' on the server by sending an operation:
+// ====================================================================
+
+// Define the client configuration to put on the server
+const configurationToSave = {
+ // Enable the session-context feature
+ // If this is not enabled then a context string set in a session will be ignored
+ loadBalanceBehavior: "UseSessionContext",
+
+ // Set a seed
+ // The seed is 0 by default, provide any number to override
+ loadBalancerContextSeed: 10,
+
+ // NOTE:
+ // The session's context string is Not set on the server
+ // You still need to set it on the client:
+ // * either as a convention on the document store
+ // * or pass it to 'setContext' method on the session
+
+ // Configuration will be in effect when 'disabled' is set to false
+ disabled: false
+};
+
+// Define the put configuration operation for the DEFAULT database
+const putConfigurationOp = new PutClientConfigurationOperation(configurationToSave);
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(putConfigurationOp);
+
+// After the operation has executed:
+// all Read & Write requests, per session, will address the node calculated from:
+// * the seed set on the server &
+// * the session's context string set on the client
+`}
+
+
+
+
+{`// Setting 'loadBalanceBehavior' on the server by sending an operation:
+// ====================================================================
+
+// Define the client configuration to put on the server
+const configurationToSave = {
+ // Enable the session-context feature
+ // If this is not enabled then a context string set in a session will be ignored
+ loadBalanceBehavior: "UseSessionContext",
+
+ // Set a seed
+ // The seed is 0 by default, provide any number to override
+ loadBalancerContextSeed: 10,
+
+ // NOTE:
+ // The session's context string is Not set on the server
+ // You still need to set it on the client:
+ // * either as a convention on the document store
+ // * or pass it to 'setContext' method on the session
+
+ // Configuration will be in effect when 'disabled' is set to false
+ disabled: false
+};
+
+// Define the put configuration operation for ALL databases
+const putConfigurationOp = new PutServerWideClientConfigurationOperation(configurationToSave);
+
+// Execute the operation by passing it to maintenance.server.send
+await documentStore.maintenance.server.send(putConfigurationOp);
+
+// After the operation has executed:
+// all Read & Write requests, per session, will address the node calculated from:
+// * the seed set on the server &
+// * the session's context string set on the client
+`}
+
+
+
+#### Set LoadBalanceBehavior on the server - from Studio:
+
+* The `loadBalanceBehavior` configuration can be set from the Studio's [Client Configuration view](../../../studio/database/settings/client-configuration-per-database.mdx).
+ Setting it from the Studio will set this configuration directly **on the server**.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+## When to use
+
+* Distributing _Read & Write_ requests among the cluster nodes can be beneficial
+ when a set of sessions handle a specific set of documents or similar data.
+ Load balancing can be achieved by routing requests from the sessions that handle similar topics to the same node, while routing other sessions to other nodes.
+
+* Another usage example can be setting the session's context to the current user,
+  thus spreading the _Read & Write_ requests per user that logs into the application.
+
+* Once load balancing is set to be per session-context,
+  if you detect that many or all sessions send requests to the same node,
+  you can add a further level of node randomization by changing the seed.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-php.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-php.mdx
new file mode 100644
index 0000000000..98985c2fbb
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-php.mdx
@@ -0,0 +1,271 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The `loadBalanceBehavior` configuration allows you to specify which sessions should
+ communicate with the same node.
+
+* Sessions that are assigned the **same context** will have all their _Read_ & _Write_
+ requests routed to the **same node**. Gain load balancing by assigning **different contexts**
+ to **different sessions**.
+* In this page:
+ * [LoadBalanceBehavior options](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#loadbalancebehavior-options)
+ * [Initialize LoadBalanceBehavior on the client](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#initialize-loadbalancebehavior-on-the-client)
+ * [Set LoadBalanceBehavior on the server:](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server)
+ * [By operation](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server---by-operation)
+ * [From Studio](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server---from-studio)
+ * [When to use](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#when-to-use)
+
+
+## LoadBalanceBehavior options
+
+### `None` (default option)
+
+* Requests will be handled based on the `ReadBalanceBehavior` configuration.
+ See the conditional flow described in [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ * **_Read_** requests:
+ The client will calculate the target node from the configured [ReadBalanceBehavior Option](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options).
+ * **_Write_** requests:
+ Will be sent to the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+ The data will then be replicated to all the other nodes in the database group.
+### `UseSessionContext`
+
+* **Load-balance**
+
+ * When this option is enabled, the client will calculate the target node from the session-id.
+ The session-id is hashed from a **context string** and an optional **seed** given by the user.
+ The context string together with the seed are referred to as **"The session context"**.
+
+ * Per session, the client will select a node from the topology list based on this session-context.
+ So sessions that use the **same** context will target the **same** node.
+
+  * All **_Read & Write_** requests made on the session (e.g. a query or a load request)
+ will address this calculated node.
+    _Read & Write_ requests that are made on the store (e.g. executing an [operation](../../../client-api/operations/what-are-operations.mdx))
+ will go to the preferred node.
+
+ * All _Write_ requests will be replicated to all the other nodes in the database group as usual.
+
+* **Failover**
+
+ * In case of a failure, the client will try to access the next node from the topology nodes list.
+
+
+
+## Initialize LoadBalanceBehavior on the client
+
+* The `LoadBalanceBehavior` convention can be set **on the client** when initializing the Document Store.
+ This will set the load balance behavior for the default database that is set on the store.
+
+* This setting can be **overridden** by setting 'LoadBalanceBehavior' on the server, see [below](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server).
+
+**Initialize conventions**:
+
+
+
+{`// Initialize 'LoadBalanceBehavior' on the client:
+$documentStore = new DocumentStore(["ServerURL_1", "ServerURL_2", "..."], "DefaultDB");
+
+$conventions = new DocumentConventions();
+// Enable the session-context feature
+// If this is not enabled then a context string set in a session will be ignored
+$conventions->setLoadBalanceBehavior(LoadBalanceBehavior::useSessionContext());
+
+
+// Assign a method that sets the default context string
+// This string will be used for sessions that do Not provide a context string
+// A sample GetDefaultContext method is defined below
+$conventions->setLoadBalancerPerSessionContextSelector(\\Closure::fromCallable([$this, 'GetDefaultContext']));
+
+// Set a seed
+// The seed is 0 by default, provide any number to override
+$conventions->setLoadBalancerContextSeed(5);
+
+$documentStore->setConventions($conventions);
+$documentStore->initialize();
+`}
+
+
+
+
+{`// A customized method for getting a default context string
+private function GetDefaultContext(string $dbName): string
+\{
+ // Method is invoked by RavenDB with the database name
+ // Use that name - or return any string of your choice
+ return "DefaultContextString";
+\}
+`}
+
+
+**Session usage**:
+
+
+
+{`// Open a session that will use the DEFAULT store values:
+$session = $documentStore->openSession();
+try \{
+ // For all Read & Write requests made in this session,
+ // node to access is calculated from string & seed values defined on the store
+ $employee = $session->load(Employee::class, "employees/1-A");
+\} finally \{
+ $session->close();
+\}
+`}
+
+
+
+
+{`// Open a session that will use a UNIQUE context string:
+$session = $documentStore->openSession();
+try \{
+ // Call SetContext, pass a unique context string for this session
+ $session->advanced()->getSessionInfo()->setContext("SomeOtherContext");
+
+ // For all Read & Write requests made in this session,
+ // node to access is calculated from the unique string & the seed defined on the store
+ $employee = $session->load(Employee::class, "employees/1-A");
+\} finally \{
+ $session->close();
+\}
+`}
+
+
+
+
+
+## Set LoadBalanceBehavior on the server
+
+
+
+**Note**:
+
+* Setting the load balance behavior on the server, either by an **Operation** or from the **Studio**,
+ only 'enables the feature' and sets the seed.
+
+* For the feature to be in effect, you still need to define the context string itself:
+ * either, per session, call the advanced `setContext` method
+ * or, set a default document store value using `setLoadBalancerPerSessionContextSelector`
+
+
+#### Set LoadBalanceBehavior on the server - by operation:
+
+* The `LoadBalanceBehavior` configuration can be set **on the server** by sending an [operation](../../../client-api/operations/what-are-operations.mdx).
+
+* The operation can modify the default database only, or all databases - see examples below.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+
+{`// Setting 'LoadBalanceBehavior' on the server by sending an operation:
+$documentStore = new DocumentStore();
+try {
+ // Define the client configuration to put on the server
+ $configurationToSave = new ClientConfiguration();
+ // Enable the session-context feature
+ // If this is not enabled then a context string set in a session will be ignored
+ $configurationToSave->setLoadBalanceBehavior(LoadBalanceBehavior::useSessionContext());
+
+ // Set a seed
+ // The seed is 0 by default, provide any number to override
+ $configurationToSave->setLoadBalancerContextSeed(10);
+
+ // NOTE:
+ // The session's context string is Not set on the server
+ // You still need to set it on the client:
+ // * either as a convention on the document store
+ // * or pass it to 'SetContext' method on the session
+
+ // Configuration will be in effect when Disabled is set to false
+ $configurationToSave->setDisabled(false);
+
+
+ // Define the put configuration operation for the DEFAULT database
+ $putConfigurationOp = new PutClientConfigurationOperation($configurationToSave);
+
+ // Execute the operation by passing it to Maintenance.Send
+ $documentStore->maintenance()->send($putConfigurationOp);
+
+ // After the operation has executed:
+ // all Read & Write requests, per session, will address the node calculated from:
+ // * the seed set on the server &
+ // * the session's context string set on the client
+} finally {
+ $documentStore->close();
+}
+`}
+
+
+
+
+{`// Setting 'LoadBalanceBehavior' on the server by sending an operation:
+$documentStore = new DocumentStore();
+try {
+ // Define the client configuration to put on the server
+ $configurationToSave = new ClientConfiguration();
+ // Enable the session-context feature
+ // If this is not enabled then a context string set in a session will be ignored
+ $configurationToSave->setLoadBalanceBehavior(LoadBalanceBehavior::useSessionContext());
+
+ // Set a seed
+ // The seed is 0 by default, provide any number to override
+ $configurationToSave->setLoadBalancerContextSeed(10);
+
+ // NOTE:
+ // The session's context string is Not set on the server
+ // You still need to set it on the client:
+ // * either as a convention on the document store
+ // * or pass it to 'SetContext' method on the session
+
+ // Configuration will be in effect when Disabled is set to false
+ $configurationToSave->setDisabled(false);
+
+
+ // Define the put configuration operation for ALL databases
+ $putConfigurationOp = new PutServerWideClientConfigurationOperation($configurationToSave);
+
+ // Execute the operation by passing it to Maintenance.Server.Send
+ $documentStore->maintenance()->server()->send($putConfigurationOp);
+
+ // After the operation has executed:
+ // all Read & Write requests, per session, will address the node calculated from:
+ // * the seed set on the server &
+ // * the session's context string set on the client
+} finally {
+ $documentStore->close();
+}
+`}
+
+
+
+#### Set LoadBalanceBehavior on the server - from Studio:
+
+* The `LoadBalanceBehavior` configuration can be set from the Studio's [Client Configuration view](../../../studio/database/settings/client-configuration-per-database.mdx).
+ Setting it from the Studio will set this configuration directly **on the server**.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+## When to use
+
+* Distributing _Read & Write_ requests among the cluster nodes can be beneficial
+ when a set of sessions handle a specific set of documents or similar data.
+ Load balancing can be achieved by routing requests from the sessions that handle similar topics to the same node, while routing other sessions to other nodes.
+
+* Another usage example is setting the session's context to the current user,
+  thus spreading the _Read & Write_ requests across the nodes per user that logs into the application.
+
+* When the load balance is set per session-context,
+  and you detect that many or all sessions send their requests to the same node,
+  a further level of node randomization can be added by changing the seed.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-python.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-python.mdx
new file mode 100644
index 0000000000..0f27437fa9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_load-balance-behavior-python.mdx
@@ -0,0 +1,252 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The `LoadBalanceBehavior` configuration allows you to specify which sessions should
+ communicate with the same node.
+
+* Sessions that are assigned the **same context** will have all their _Read_ & _Write_
+ requests routed to the **same node**. Gain load balancing by assigning **different contexts**
+ to **different sessions**.
+* In this page:
+ * [LoadBalanceBehavior options](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#loadbalancebehavior-options)
+ * [Initialize LoadBalanceBehavior on the client](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#initialize-loadbalancebehavior-on-the-client)
+ * [Set LoadBalanceBehavior on the server:](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server)
+ * [By operation](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server---by-operation)
+ * [From Studio](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server---from-studio)
+ * [When to use](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#when-to-use)
+
+
+## LoadBalanceBehavior options
+
+### `None` (default option)
+
+* Requests will be handled based on the `ReadBalanceBehavior` configuration.
+ See the conditional flow described in [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ * **_Read_** requests:
+ The client will calculate the target node from the configured [ReadBalanceBehavior Option](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options).
+ * **_Write_** requests:
+ Will be sent to the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+ The data will then be replicated to all the other nodes in the database group.
+### `UseSessionContext`
+
+* **Load-balance**
+
+ * When this option is enabled, the client will calculate the target node from the session-id.
+ The session-id is hashed from a **context string** and an optional **seed** given by the user.
+    The context string together with the seed is referred to as **"The session context"**.
+
+ * Per session, the client will select a node from the topology list based on this session-context.
+ So sessions that use the **same** context will target the **same** node.
+
+  * All **_Read & Write_** requests made on the session (e.g., a query or a load request)
+    will address this calculated node.
+    _Read & Write_ requests that are made on the store (e.g., executing an [operation](../../../client-api/operations/what-are-operations.mdx))
+ will go to the preferred node.
+
+ * All _Write_ requests will be replicated to all the other nodes in the database group as usual.
+
+* **Failover**
+
+ * In case of a failure, the client will try to access the next node from the topology nodes list.
+
+
+
+## Initialize LoadBalanceBehavior on the client
+
+* The `LoadBalanceBehavior` convention can be set **on the client** when initializing the Document Store.
+ This will set the load balance behavior for the default database that is set on the store.
+
+* This setting can be **overridden** by setting 'LoadBalanceBehavior' on the server, see [below](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#set-loadbalancebehavior-on-the-server).
+
+**Initialize conventions**:
+
+
+
+{`# Initialize 'LoadBalanceBehavior' on the client:
+document_store = DocumentStore(
+ urls=["ServerURL_1", "ServerURL_2", "..."],
+ database="DefaultDB",
+)
+conventions = DocumentConventions()
+
+# Enable the session-context feature
+# If this is not enabled then a context string set in a session will be ignored
+conventions.load_balance_behavior = LoadBalanceBehavior.USE_SESSION_CONTEXT
+
+# Assign a method that sets the default context string
+# This string will be used for sessions that do Not provide a context string
+# A sample get_default_context function is defined below
+conventions.load_balancer_per_session_context_selector = get_default_context
+
+# Set a seed
+# The seed is 0 by default, provide any number to override
+conventions.load_balancer_context_seed = 5
+
+document_store.conventions = conventions
+document_store.initialize()
+`}
+
+
+
+
+{`# A customized method for getting a default context string
+def get_default_context(db_name: str) -> str:
+    # This function is invoked by RavenDB with the database name
+ # Use that name - or return any string of your choice
+ return "DefaultContextString"
+`}
+
+
+**Session usage**:
+
+
+
+{`# Open a session that will use the DEFAULT store values:
+with document_store.open_session() as session:
+ # For all Read & Write requests made in this session
+ # node to access is calculated from string & seed values defined on the store
+ employee = session.load("employees/1-A", Employee)
+`}
+
+
+
+
+{`# Open a session that will use a UNIQUE context string:
+with document_store.open_session() as session:
+    # Set the session's context property, passing a unique context string for this session
+ session.advanced.session_info.context = "SomeOtherContext"
+
+ # For all Read & Write requests made in this session,
+ # node to access is calculated from the unique string & the seed defined on the store
+ employee = session.load("employees/1-A", Employee)
+`}
+
+
+
+
+
+## Set LoadBalanceBehavior on the server
+
+
+
+**Note**:
+
+* Setting the load balance behavior on the server, either by an **Operation** or from the **Studio**,
+ only 'enables the feature' and sets the seed.
+
+* For the feature to be in effect, you still need to define the context string itself:
+  * either per session, by setting `session.advanced.session_info.context`
+  * or on the document store, by setting a default context selector via `load_balancer_per_session_context_selector`
+
+
+#### Set LoadBalanceBehavior on the server - by operation:
+
+* The `LoadBalanceBehavior` configuration can be set **on the server** by sending an [operation](../../../client-api/operations/what-are-operations.mdx).
+
+* The operation can modify the default database only, or all databases - see examples below.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+
+{`# Setting 'LoadBalanceBehavior' on the server by sending an operation:
+with document_store:
+ # Define the client configuration to put on the server
+ configuration_to_save = ClientConfiguration()
+ # Enable the session-context feature
+ # If this is not enabled then a context string set in a session will be ignored
+ configuration_to_save.load_balance_behavior = LoadBalanceBehavior.USE_SESSION_CONTEXT
+
+ # Set a seed
+ # The seed is 0 by default, provide any number to override
+    configuration_to_save.load_balancer_context_seed = 10
+
+ # NOTE:
+ # The session's context string is Not set on the server
+ # You still need to set it on the client:
+ # * either as a convention on the document store
+    # * or set the session's 'context' property
+
+ # Configuration will be in effect when Disabled is set to false
+ configuration_to_save.disabled = False
+
+ # Define the put configuration operation for the DEFAULT database
+ put_configuration_op = PutClientConfigurationOperation(configuration_to_save)
+
+ # Execute the operation by passing it to maintenance.send
+ document_store.maintenance.send(put_configuration_op)
+
+ # After the operation has executed:
+ # all Read & Write requests, per session, will address the node calculated from:
+ # * the seed set on the server &
+ # * the session's context string set on the client
+`}
+
+
+
+
+{`with document_store:
+ # Define the client configuration to put on the server
+ configuration_to_save = ClientConfiguration()
+ # Enable the session-context feature
+ # If this is not enabled then a context string set in a session will be ignored
+ configuration_to_save.load_balance_behavior = LoadBalanceBehavior.USE_SESSION_CONTEXT
+
+ # Set a seed
+ # The seed is 0 by default, provide any number to override
+    configuration_to_save.load_balancer_context_seed = 10
+
+ # NOTE:
+ # The session's context string is Not set on the server
+ # You still need to set it on the client:
+ # * either as a convention on the document store
+    # * or set the session's 'context' property
+
+ # Configuration will be in effect when Disabled is set to false
+ configuration_to_save.disabled = False
+
+ # Define the put configuration operation for ALL databases
+ put_configuration_op = PutServerWideClientConfigurationOperation(configuration_to_save)
+
+ # Execute the operation by passing it to maintenance.server.send
+ document_store.maintenance.server.send(put_configuration_op)
+
+ # After the operation has executed:
+ # all Read & Write requests, per session, will address the node calculated from:
+ # * the seed set on the server &
+ # * the session's context string set on the client
+`}
+
+
+
+#### Set LoadBalanceBehavior on the server - from Studio:
+
+* The `LoadBalanceBehavior` configuration can be set from the Studio's [Client Configuration view](../../../studio/database/settings/client-configuration-per-database.mdx).
+ Setting it from the Studio will set this configuration directly **on the server**.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+## When to use
+
+* Distributing _Read & Write_ requests among the cluster nodes can be beneficial
+ when a set of sessions handle a specific set of documents or similar data.
+ Load balancing can be achieved by routing requests from the sessions that handle similar topics to the same node, while routing other sessions to other nodes.
+
+* Another usage example is setting the session's context to the current user,
+  thus spreading the _Read & Write_ requests across the nodes per user that logs into the application.
+
+* When the load balance is set per session-context,
+  and you detect that many or all sessions send their requests to the same node,
+  a further level of node randomization can be added by changing the seed.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-csharp.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-csharp.mdx
new file mode 100644
index 0000000000..dec0f7e66e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-csharp.mdx
@@ -0,0 +1,169 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When set, the `ReadBalanceBehavior` configuration will be in effect according to the
+ conditional flow described in [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* Once configuration is in effect then:
+ * **_Read_** requests - will be sent to the node determined by the configured option - see below.
+ * **_Write_** requests - are always sent to the preferred node.
+ The data will then be replicated to all the other nodes in the database group.
+ * Upon a node failure, the node to failover to is also determined by the defined option.
+
+* In this page:
+ * [ReadBalanceBehavior options](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options)
+ * [Initialize ReadBalanceBehavior on the client](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#initialize-readbalancebehavior-on-the-client)
+ * [Set ReadBalanceBehavior on the server:](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server)
+ * [By operation](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server---by-operation)
+ * [From Studio](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server---from-studio)
+ * [When to use](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#when-to-use)
+
+
+## ReadBalanceBehavior options
+
+### `None` (default option)
+
+ * **Read-balance**
+ No read balancing will occur.
+ The client will always send _Read_ requests to the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+ * **Failover**
+    The client will fail over to other nodes in the order they appear in the [topology nodes list](../../../studio/database/settings/manage-database-group.mdx#database-group-topology---actions).
+### `RoundRobin`
+
+* **Read-balance**
+ * Each session opened is assigned an incremental session-id number.
+ **Per session**, the client will select the next node from the topology list based on this internal session-id.
+  * All _Read_ requests made on the session (e.g., a query or a load request)
+    will address the calculated node.
+  * A _Read_ request that is made on the store (e.g., executing an [operation](../../../client-api/operations/what-are-operations.mdx))
+ will go to the preferred node.
+* **Failover**
+ In case of a failure, the client will try the next node from the topology nodes list.
+### `FastestNode`
+
+ * **Read-balance**
+ All _Read_ requests will go to the fastest node.
+ The fastest node is determined by a [Speed Test](../../../client-api/cluster/speed-test.mdx).
+ * **Failover**
+ In case of a failure, a speed test will be triggered again,
+ and in the meantime the client will use the preferred node.
+
+
+
+## Initialize ReadBalanceBehavior on the client
+
+* The `ReadBalanceBehavior` convention can be set **on the client** when initializing the Document Store.
+ This will set the read balance behavior for the default database that is set on the store.
+
+* This setting can be **overridden** by setting 'ReadBalanceBehavior' on the server, see [below](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server).
+
+
+
+{`// Initialize 'ReadBalanceBehavior' on the client:
+var documentStore = new DocumentStore
+\{
+ Urls = new[] \{ "ServerURL_1", "ServerURL_2", "..." \},
+ Database = "DefaultDB",
+ Conventions = new DocumentConventions
+ \{
+ // With ReadBalanceBehavior set to: 'FastestNode':
+ // Client READ requests will address the fastest node
+ // Client WRITE requests will address the preferred node
+ ReadBalanceBehavior = ReadBalanceBehavior.FastestNode
+ \}
+\}.Initialize();
+`}
+
+
+
+
+
+## Set ReadBalanceBehavior on the server
+
+#### Set ReadBalanceBehavior on the server - by operation:
+
+* The `ReadBalanceBehavior` configuration can be set **on the server** by sending an [operation](../../../client-api/operations/what-are-operations.mdx).
+
+* The operation can modify the default database only, or all databases - see examples below.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+
+{`// Setting 'ReadBalanceBehavior' on the server by sending an operation:
+using (documentStore)
+{
+ // Define the client configuration to put on the server
+ var clientConfiguration = new ClientConfiguration
+ {
+ // Replace 'FastestNode' (from example above) with 'RoundRobin'
+ ReadBalanceBehavior = ReadBalanceBehavior.RoundRobin
+ };
+
+ // Define the put configuration operation for the DEFAULT database
+ var putConfigurationOp = new PutClientConfigurationOperation(clientConfiguration);
+
+ // Execute the operation by passing it to Maintenance.Send
+ documentStore.Maintenance.Send(putConfigurationOp);
+
+ // After the operation has executed:
+ // All WRITE requests will continue to address the preferred node
+ // READ requests, per session, will address a different node based on the RoundRobin logic
+}
+`}
+
+
+
+
+{`// Setting 'ReadBalanceBehavior' on the server by sending an operation:
+using (documentStore)
+{
+ // Define the client configuration to put on the server
+ var clientConfiguration = new ClientConfiguration
+ {
+ // Replace 'FastestNode' (from example above) with 'RoundRobin'
+ ReadBalanceBehavior = ReadBalanceBehavior.RoundRobin
+ };
+
+    // Define the put configuration operation for ALL databases
+ var putConfigurationOp = new PutServerWideClientConfigurationOperation(clientConfiguration);
+
+ // Execute the operation by passing it to Maintenance.Server.Send
+ documentStore.Maintenance.Server.Send(putConfigurationOp);
+
+ // After the operation has executed:
+ // All WRITE requests will continue to address the preferred node
+ // READ requests, per session, will address a different node based on the RoundRobin logic
+}
+`}
+
+
+
+
+#### Set ReadBalanceBehavior on the server - from Studio:
+
+* The `ReadBalanceBehavior` configuration can be set from the Studio's [Client Configuration view](../../../studio/database/settings/client-configuration-per-database.mdx).
+ Setting it from the Studio will set this configuration directly **on the server**.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+## When to use
+
+* Setting the read balance behavior is beneficial when you only care about distributing the _Read_ requests among the cluster nodes,
+ and when all _Write_ requests can go to the same node.
+
+* Using the 'FastestNode' option is beneficial when some nodes in the system are known to be faster than others,
+ thus letting the fastest node serve each read request.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-nodejs.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-nodejs.mdx
new file mode 100644
index 0000000000..3d29583c2d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-nodejs.mdx
@@ -0,0 +1,163 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When set, the `readBalanceBehavior` configuration will be in effect according to the
+ conditional flow described in [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* Once configuration is in effect then:
+ * **_Read_** requests - will be sent to the node determined by the configured option - see below.
+ * **_Write_** requests - are always sent to the preferred node.
+ The data will then be replicated to all the other nodes in the database group.
+ * Upon a node failure, the node to failover to is also determined by the defined option.
+
+* In this page:
+ * [readBalanceBehavior options](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options)
+ * [Initialize readBalanceBehavior on the client](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#initialize-readbalancebehavior-on-the-client)
+ * [Set readBalanceBehavior on the server:](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server)
+ * [By operation](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server---by-operation)
+ * [From Studio](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server---from-studio)
+ * [When to use](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#when-to-use)
+
+
+## readBalanceBehavior options
+
+### `None` (default option)
+
+ * **Read-balance**
+ No read balancing will occur.
+ The client will always send _Read_ requests to the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+ * **Failover**
+    The client will fail over to other nodes in the order they appear in the [topology nodes list](../../../studio/database/settings/manage-database-group.mdx#database-group-topology---actions).
+### `RoundRobin`
+
+* **Read-balance**
+ * Each session opened is assigned an incremental session-id number.
+ **Per session**, the client will select the next node from the topology list based on this internal session-id.
+  * All _Read_ requests made on the session (e.g., a query or a load request)
+    will address the calculated node.
+  * A _Read_ request that is made on the store (e.g., executing an [operation](../../../client-api/operations/what-are-operations.mdx))
+ will go to the preferred node.
+* **Failover**
+ In case of a failure, the client will try the next node from the topology nodes list.
+### `FastestNode`
+
+ * **Read-balance**
+ All _Read_ requests will go to the fastest node.
+ The fastest node is determined by a [Speed Test](../../../client-api/cluster/speed-test.mdx).
+ * **Failover**
+ In case of a failure, a speed test will be triggered again,
+ and in the meantime the client will use the preferred node.
+
+
+
+## Initialize readBalanceBehavior on the client
+
+* The `readBalanceBehavior` convention can be set **on the client** when initializing the Document Store.
+ This will set the read balance behavior for the default database that is set on the store.
+
+* This setting can be **overridden** by setting 'readBalanceBehavior' on the server, see [below](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server).
+
+
+
+{`// Initialize 'readBalanceBehavior' on the client:
+// ===============================================
+
+const documentStore = new DocumentStore(["serverUrl_1", "serverUrl_2", "..."], "DefaultDB");
+
+// For example:
+// With readBalanceBehavior set to: 'FastestNode':
+// Client READ requests will address the fastest node
+// Client WRITE requests will address the preferred node
+documentStore.conventions.readBalanceBehavior = "FastestNode";
+
+documentStore.initialize();
+`}
+
+
+
+
+
+## Set readBalanceBehavior on the server
+
+#### Set readBalanceBehavior on the server - by operation:
+
+* The `readBalanceBehavior` configuration can be set **on the server** by sending an [operation](../../../client-api/operations/what-are-operations.mdx).
+
+* The operation can modify the default database only, or all databases - see examples below.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+
+{`// Setting 'readBalanceBehavior' on the server by sending an operation:
+// ====================================================================
+
+// Define the client configuration to put on the server
+const configurationToSave = {
+ // Replace 'FastestNode' (from example above) with 'RoundRobin'
+ readBalanceBehavior: "RoundRobin"
+};
+
+// Define the put configuration operation for the DEFAULT database
+const putConfigurationOp = new PutClientConfigurationOperation(configurationToSave);
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(putConfigurationOp);
+
+// After the operation has executed:
+// All WRITE requests will continue to address the preferred node
+// READ requests, per session, will address a different node based on the RoundRobin logic
+`}
+
+
+
+
+{`// Setting 'readBalanceBehavior' on the server by sending an operation:
+// ====================================================================
+
+// Define the client configuration to put on the server
+const configurationToSave = {
+ // Replace 'FastestNode' (from example above) with 'RoundRobin'
+ readBalanceBehavior: "RoundRobin"
+};
+
+// Define the put configuration operation for ALL databases
+const putConfigurationOp = new PutServerWideClientConfigurationOperation(configurationToSave);
+
+// Execute the operation by passing it to maintenance.server.send
+await documentStore.maintenance.server.send(putConfigurationOp);
+
+// After the operation has executed:
+// All WRITE requests will continue to address the preferred node
+// READ requests, per session, will address a different node based on the RoundRobin logic
+`}
+
+
+
+#### Set readBalanceBehavior on the server - from Studio:
+
+* The `readBalanceBehavior` configuration can be set from the Studio's [Client Configuration view](../../../studio/database/settings/client-configuration-per-database.mdx).
+ Setting it from the Studio will set this configuration directly **on the server**.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+## When to use
+
+* Setting the read balance behavior is beneficial when you only care about distributing the _Read_ requests among the cluster nodes,
+ and when all _Write_ requests can go to the same node.
+
+* Using the 'FastestNode' option is beneficial when some nodes in the system are known to be faster than others,
+ thus letting the fastest node serve each read request.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-php.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-php.mdx
new file mode 100644
index 0000000000..0eb01f15c2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-php.mdx
@@ -0,0 +1,168 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When set, the `ReadBalanceBehavior` configuration will be in effect according to the
+ conditional flow described in [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* Once configuration is in effect then:
+ * **_Read_** requests - will be sent to the node determined by the configured option - see below.
+ * **_Write_** requests - are always sent to the preferred node.
+ The data will then be replicated to all the other nodes in the database group.
+ * Upon a node failure, the node to failover to is also determined by the defined option.
+
+* In this page:
+ * [ReadBalanceBehavior options](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options)
+ * [Initialize ReadBalanceBehavior on the client](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#initialize-readbalancebehavior-on-the-client)
+ * [Set ReadBalanceBehavior on the server:](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server)
+ * [By operation](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server---by-operation)
+ * [From Studio](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server---from-studio)
+ * [When to use](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#when-to-use)
+
+
+## ReadBalanceBehavior options
+
+### `None` (default option)
+
+ * **Read-balance**
+ No read balancing will occur.
+ The client will always send _Read_ requests to the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+ * **Failover**
+    The client will fail over to other nodes in the order they appear in the [topology nodes list](../../../studio/database/settings/manage-database-group.mdx#database-group-topology---actions).
+### `RoundRobin`
+
+* **Read-balance**
+ * Each session opened is assigned an incremental session-id number.
+ **Per session**, the client will select the next node from the topology list based on this internal session-id.
+  * All _Read_ requests made on the session (e.g., a query or a load request)
+    will address the calculated node.
+  * A _Read_ request that is made on the store (e.g., executing an [operation](../../../client-api/operations/what-are-operations.mdx))
+ will go to the preferred node.
+* **Failover**
+ In case of a failure, the client will try the next node from the topology nodes list.
+### `FastestNode`
+
+ * **Read-balance**
+ All _Read_ requests will go to the fastest node.
+ The fastest node is determined by a [Speed Test](../../../client-api/cluster/speed-test.mdx).
+ * **Failover**
+ In case of a failure, a speed test will be triggered again,
+ and in the meantime the client will use the preferred node.
+
+
+
+## Initialize ReadBalanceBehavior on the client
+
+* The `ReadBalanceBehavior` convention can be set **on the client** when initializing the Document Store.
+ This will set the read balance behavior for the default database that is set on the store.
+
+* This setting can be **overridden** by setting 'ReadBalanceBehavior' on the server, see [below](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server).
+
+
+
+{`// Initialize 'ReadBalanceBehavior' on the client:
+$documentStore = new DocumentStore(["ServerURL_1", "ServerURL_2", "..."], "DefaultDB");
+
+$conventions = new DocumentConventions();
+// With ReadBalanceBehavior set to: 'FastestNode':
+// Client READ requests will address the fastest node
+// Client WRITE requests will address the preferred node
+$conventions->setReadBalanceBehavior(ReadBalanceBehavior::fastestNode());
+
+$documentStore->setConventions($conventions);
+$documentStore->initialize();
+`}
+
+
+
+
+
+## Set ReadBalanceBehavior on the server
+
+#### Set ReadBalanceBehavior on the server - by operation:
+
+* The `ReadBalanceBehavior` configuration can be set **on the server** by sending an [operation](../../../client-api/operations/what-are-operations.mdx).
+
+* The operation can modify the default database only, or all databases - see examples below.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+
+{`// Setting 'ReadBalanceBehavior' on the server by sending an operation:
+$documentStore = new DocumentStore();
+try {
+ // Define the client configuration to put on the server
+ $clientConfiguration = new ClientConfiguration();
+ // Replace 'FastestNode' (from example above) with 'RoundRobin'
+ $clientConfiguration->setReadBalanceBehavior(ReadBalanceBehavior::roundRobin());
+
+ // Define the put configuration operation for the DEFAULT database
+ $putConfigurationOp = new PutClientConfigurationOperation($clientConfiguration);
+
+ // Execute the operation by passing it to Maintenance.Send
+ $documentStore->maintenance()->send($putConfigurationOp);
+
+ // After the operation has executed:
+ // All WRITE requests will continue to address the preferred node
+ // READ requests, per session, will address a different node based on the RoundRobin logic
+} finally {
+ $documentStore->close();
+}
+`}
+
+
+
+
+{`// Setting 'ReadBalanceBehavior' on the server by sending an operation:
+$documentStore = new DocumentStore();
+try {
+ // Define the client configuration to put on the server
+ $clientConfiguration = new ClientConfiguration();
+
+ // Replace 'FastestNode' (from example above) with 'RoundRobin'
+ $clientConfiguration->setReadBalanceBehavior(ReadBalanceBehavior::roundRobin());
+
+    // Define the put configuration operation for ALL databases
+ $putConfigurationOp = new PutServerWideClientConfigurationOperation($clientConfiguration);
+
+ // Execute the operation by passing it to Maintenance.Server.Send
+ $documentStore->maintenance()->server()->send($putConfigurationOp);
+
+ // After the operation has executed:
+ // All WRITE requests will continue to address the preferred node
+ // READ requests, per session, will address a different node based on the RoundRobin logic
+} finally {
+ $documentStore->close();
+}
+`}
+
+
+
+
+#### Set ReadBalanceBehavior on the server - from Studio:
+
+* The `ReadBalanceBehavior` configuration can be set from the Studio's [Client Configuration view](../../../studio/database/settings/client-configuration-per-database.mdx).
+ Setting it from the Studio will set this configuration directly **on the server**.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+## When to use
+
+* Setting the read balance behavior is beneficial when you only care about distributing the _Read_ requests among the cluster nodes,
+ and when all _Write_ requests can go to the same node.
+
+* Using the 'FastestNode' option is beneficial when some nodes in the system are known to be faster than others,
+ thus letting the fastest node serve each read request.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-python.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-python.mdx
new file mode 100644
index 0000000000..099d3883da
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/_read-balance-behavior-python.mdx
@@ -0,0 +1,160 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When set, the `ReadBalanceBehavior` configuration will be in effect according to the
+ conditional flow described in [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* Once configuration is in effect then:
+ * **_Read_** requests - will be sent to the node determined by the configured option - see below.
+ * **_Write_** requests - are always sent to the preferred node.
+ The data will then be replicated to all the other nodes in the database group.
+ * Upon a node failure, the node to failover to is also determined by the defined option.
+
+* In this page:
+ * [ReadBalanceBehavior options](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options)
+ * [Initialize ReadBalanceBehavior on the client](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#initialize-readbalancebehavior-on-the-client)
+ * [Set ReadBalanceBehavior on the server:](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server)
+ * [By operation](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server---by-operation)
+ * [From Studio](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server---from-studio)
+ * [When to use](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#when-to-use)
+
+
+## ReadBalanceBehavior options
+
+### `None` (default option)
+
+ * **Read-balance**
+ No read balancing will occur.
+ The client will always send _Read_ requests to the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+ * **Failover**
+    The client will fail over to other nodes in the order they appear in the [topology nodes list](../../../studio/database/settings/manage-database-group.mdx#database-group-topology---actions).
+### `RoundRobin`
+
+* **Read-balance**
+ * Each session opened is assigned an incremental session-id number.
+ **Per session**, the client will select the next node from the topology list based on this internal session-id.
+  * All _Read_ requests made on the session (e.g., a query or a load request)
+    will address the calculated node.
+  * A _Read_ request that is made on the store (e.g., executing an [operation](../../../client-api/operations/what-are-operations.mdx))
+ will go to the preferred node.
+* **Failover**
+ In case of a failure, the client will try the next node from the topology nodes list.
+### `FastestNode`
+
+ * **Read-balance**
+ All _Read_ requests will go to the fastest node.
+ The fastest node is determined by a [Speed Test](../../../client-api/cluster/speed-test.mdx).
+ * **Failover**
+ In case of a failure, a speed test will be triggered again,
+ and in the meantime the client will use the preferred node.
+
+
+
+## Initialize ReadBalanceBehavior on the client
+
+* The `ReadBalanceBehavior` convention can be set **on the client** when initializing the Document Store.
+ This will set the read balance behavior for the default database that is set on the store.
+
+* This setting can be **overridden** by setting 'ReadBalanceBehavior' on the server, see [below](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#set-readbalancebehavior-on-the-server).
+
+
+
+{`# Initialize 'ReadBalanceBehavior' on the client:
+document_store = DocumentStore(
+ urls=["ServerURL_1", "ServerURL_2", "..."],
+ database="DefaultDB",
+)
+conventions = DocumentConventions()
+# With ReadBalanceBehavior set to: 'FastestNode':
+# Client READ requests will address the fastest node
+# Client WRITE requests will address the preferred node
+conventions.read_balance_behavior = ReadBalanceBehavior.FASTEST_NODE
+
+document_store.conventions = conventions
+document_store.initialize()
+`}
+
+
+
+
+
+## Set ReadBalanceBehavior on the server
+
+#### Set ReadBalanceBehavior on the server - by operation:
+
+* The `ReadBalanceBehavior` configuration can be set **on the server** by sending an [operation](../../../client-api/operations/what-are-operations.mdx).
+
+* The operation can modify the default database only, or all databases - see examples below.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+
+{`# Setting 'ReadBalanceBehavior' on the server by sending an operation:
+with document_store:
+ # Define the client configuration to put on the server
+ client_configuration = ClientConfiguration()
+ # Replace 'FastestNode' (from the example above) with 'RoundRobin'
+ client_configuration.read_balance_behavior = ReadBalanceBehavior.ROUND_ROBIN
+
+ # Define the put configuration operation for the DEFAULT database
+ put_configuration_op = PutClientConfigurationOperation(client_configuration)
+
+ # Execute the operation by passing it to maintenance.send
+ document_store.maintenance.send(put_configuration_op)
+
+ # After the operation has executed:
+ # All WRITE requests will continue to address the preferred node
+ # READ requests, per session, will address a different node based on the RoundRobin logic
+`}
+
+
+
+
+{`# Setting 'ReadBalanceBehavior' on the server by sending an operation:
+with document_store:
+ # Define the client configuration to put on the server
+ client_configuration = ClientConfiguration()
+ # Replace 'FastestNode' (from the example above) with 'RoundRobin'
+ client_configuration.read_balance_behavior = ReadBalanceBehavior.ROUND_ROBIN
+
+    # Define the put configuration operation for ALL databases
+ put_configuration_op = PutServerWideClientConfigurationOperation(client_configuration)
+
+ # Execute the operation by passing it to maintenance.server.send
+ document_store.maintenance.server.send(put_configuration_op)
+
+ # After the operation has executed:
+ # All WRITE requests will continue to address the preferred node
+ # READ requests, per session, will address a different node based on the RoundRobin logic
+`}
+
+
+
+
+#### Set ReadBalanceBehavior on the server - from Studio:
+
+* The `ReadBalanceBehavior` configuration can be set from the Studio's [Client Configuration view](../../../studio/database/settings/client-configuration-per-database.mdx).
+ Setting it from the Studio will set this configuration directly **on the server**.
+
+* Once configuration on the server has changed, the running client will get updated with the new settings.
+ See [keeping client up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date).
+
+
+
+## When to use
+
+* Setting the read balance behavior is beneficial when you only care about distributing the _Read_ requests among the cluster nodes,
+ and when all _Write_ requests can go to the same node.
+
+* Using the 'FastestNode' option is beneficial when some nodes in the system are known to be faster than others,
+ thus letting the fastest node serve each read request.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/load-balance-behavior.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/load-balance-behavior.mdx
new file mode 100644
index 0000000000..f80d62262f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/load-balance-behavior.mdx
@@ -0,0 +1,53 @@
+---
+title: "Load balance behavior"
+hide_table_of_contents: true
+sidebar_label: Load balance behavior
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import LoadBalanceBehaviorCsharp from './_load-balance-behavior-csharp.mdx';
+import LoadBalanceBehaviorPython from './_load-balance-behavior-python.mdx';
+import LoadBalanceBehaviorPhp from './_load-balance-behavior-php.mdx';
+import LoadBalanceBehaviorNodejs from './_load-balance-behavior-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/overview.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/overview.mdx
new file mode 100644
index 0000000000..289909a6ed
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/overview.mdx
@@ -0,0 +1,106 @@
+---
+title: "Load balancing client requests - Overview"
+hide_table_of_contents: true
+sidebar_label: Overview
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Load balancing client requests - Overview
+
+
+* A database can have multiple instances, each one residing on a different cluster node.
+ Each instance is a complete replica of the database.
+
+* The [database-group-topology](../../../studio/database/settings/manage-database-group.mdx#database-group-topology---view) is the list of nodes that contain those database replicas.
+ The first node in this list is called the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node).
+
+* The client is kept up-to-date with this topology list.
+ __The client decides which node from this list to access__ when making requests to the RavenDB cluster.
+
+* By default, the client will access the preferred node for all _Read & Write_ requests it makes.
+ This default behavior can be changed by configuring:
+ * [ReadBalanceBehavior](../../../client-api/configuration/load-balance/read-balance-behavior.mdx) - load balancing `Read` requests only
+ * [LoadBalanceBehavior](../../../client-api/configuration/load-balance/load-balance-behavior.mdx) - load balancing `Read & Write` requests
+* In this page:
+ * [Keeping the client topology up-to-date](../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date)
+ * [Client logic for choosing a node](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node)
+ * [The preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node)
+ * [Single-node session usage](../../../client-api/configuration/load-balance/overview.mdx#single-node-session-usage)
+
+
+## Keeping the client topology up-to-date
+
+* Upon Document Store initialization, the client receives the __initial topology list__,
+  after which it is kept up-to-date with any changes made to that list.
+
+* If the topology list (or any other client configuration) has changed on the server,
+  the client will learn about it upon making its __next request__ to the server
+  and will update its configuration accordingly.
+
+* In addition, every 5 minutes, the client will fetch the current topology from the server
+ if no requests were made within that time frame.
+
+* Any client-configuration settings that are set on the server side __override__ the settings made on the client-side.
+
+* For more information see [Topology in the client](../../../client-api/cluster/how-client-integrates-with-replication-and-cluster.mdx#cluster-topology-in-the-client).
+
+
+
+## Client logic for choosing a node
+
+The client uses the following logic (from top to bottom) to determine which node to send the request to (a simplified sketch follows the list):
+
+
+* Use the __specified node__:
+ A client can explicitly specify the target node when executing a [server-maintenance operation](../../../client-api/operations/what-are-operations.mdx#server-maintenance-operations).
+ Learn more in [switch operation to a different node](../../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx).
+* Else, if using-session-context is defined, use __LoadBalanceBehavior__:
+ Per session, the client will select a node based on the [session context](../../../client-api/configuration/load-balance/load-balance-behavior.mdx#loadbalancebehavior-options).
+ All `Read & Write` requests made on the session will be directed to that node.
+* Else, if defined, use __ReadBalanceBehavior__:
+ `Read` requests: The client will select a node based on the [read balance options](../../../client-api/configuration/load-balance/read-balance-behavior.mdx#readbalancebehavior-options).
+ `Write` requests: All _Write_ requests will be directed to the preferred node.
+* Else, use the __preferred node__:
+ Use the [preferred node](../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) for both `Read & Write` requests.
+
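+The following is an illustrative sketch of this decision order only; it is not the RavenDB client source,
+and all type and member names in it are hypothetical stand-ins:
+
+
+{`// Hypothetical sketch of the node-selection precedence described above
+using System;
+using System.Collections.Generic;
+
+enum LoadBalance { None, UseSessionContext }
+enum ReadBalance { None, RoundRobin, FastestNode }
+
+class NodeChooser
+{
+    public List<string> Nodes = new() { "A", "B", "C" }; // topology node tags
+    public LoadBalance LoadBalanceBehavior;
+    public ReadBalance ReadBalanceBehavior;
+    public int Seed;
+
+    public string ChooseNode(bool isRead, string specifiedNode = null, string sessionContext = null)
+    {
+        // 1. An explicitly specified node always wins
+        if (specifiedNode != null)
+            return specifiedNode;
+
+        // 2. LoadBalanceBehavior: derive the node from the session context + seed
+        //    (applies to both Read & Write requests made on the session)
+        if (LoadBalanceBehavior == LoadBalance.UseSessionContext && sessionContext != null)
+            return Nodes[(HashCode.Combine(sessionContext, Seed) & 0x7fffffff) % Nodes.Count];
+
+        // 3. ReadBalanceBehavior: applies to Read requests only
+        if (isRead && ReadBalanceBehavior != ReadBalance.None)
+            return Nodes[1]; // placeholder: RoundRobin advances per session; FastestNode uses a speed test
+
+        // 4. Otherwise fall back to the preferred node - the first node in the topology list
+        return Nodes[0];
+    }
+}
+`}
+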
+
+
+
+
+## The preferred node
+
+* The preferred node is simply the __first__ node in the [topology nodes list](../../../studio/database/settings/manage-database-group.mdx#database-group-topology---view).
+* __By default__, when no load balancing strategy is defined,
+ the client will send all `Read & Write` requests to this node.
+* When the preferred node is in a failure state,
+ the cluster will update the topology, assigning another node to be the preferred one.
+* Once the preferred node is back up and has caught up with all data,
+ it will be placed __last__ in the topology list.
+* If all the nodes in the topology list are in a failure state, then the first node in the list will be the 'preferred' node.
+  The client will then get an error, or recover if the failure was transient.
+* The preferred node can be explicitly set by:
+ * Reordering the topology list from the [Database Group view](../../../studio/database/settings/manage-database-group.mdx#database-group-topology---actions).
+  * Sending [ReorderDatabaseMembersOperation](../../../client-api/operations/server-wide/reorder-database-members.mdx) from the client code, as sketched below.
+* The cluster may assign a different preferred node when removing/adding new nodes to the database-group.
+
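+For example, a minimal C# sketch of reordering the topology so that a specific node becomes the preferred one
+(the database name and node tags below are sample values):
+
+
+{`// Make node "C" the first node in the list, i.e. the preferred node
+var reorderOp = new ReorderDatabaseMembersOperation(
+    "Northwind", new List<string> { "C", "A", "B" });
+
+documentStore.Maintenance.Server.Send(reorderOp);
+`}
+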
+
+
+## Single-node session usage
+
+* When using a [single-node session](../../../client-api/session/cluster-transaction/overview.mdx#single-node),
+ a short delay in replicating changes to all nodes in the cluster is acceptable in most cases.
+
+* If `ReadBalanceBehavior` or `LoadBalanceBehavior` is defined,
+  then the next session you open may access a different node.
+  So if you need to ensure that the next request will be able to _immediately_ read what you just wrote,
+  use [Write Assurance](../../../client-api/session/saving-changes.mdx#waiting-for-replication---write-assurance), as sketched below.
+
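+For example, a minimal C# sketch of write assurance, assuming an `Employee` class
+(the timeout and replica count below are sample values):
+
+
+{`using (var session = documentStore.OpenSession())
+{
+    session.Store(new Employee { FirstName = "John" }, "employees/1-A");
+
+    // Block SaveChanges until the write has been replicated to at least 2 replicas
+    session.Advanced.WaitForReplicationAfterSaveChanges(
+        timeout: TimeSpan.FromSeconds(30),
+        replicas: 2);
+
+    session.SaveChanges();
+}
+`}
+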
+
+
diff --git a/versioned_docs/version-7.1/client-api/configuration/load-balance/read-balance-behavior.mdx b/versioned_docs/version-7.1/client-api/configuration/load-balance/read-balance-behavior.mdx
new file mode 100644
index 0000000000..2822c87618
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/load-balance/read-balance-behavior.mdx
@@ -0,0 +1,53 @@
+---
+title: "Read balance behavior"
+hide_table_of_contents: true
+sidebar_label: Read balance behavior
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ReadBalanceBehaviorCsharp from './_read-balance-behavior-csharp.mdx';
+import ReadBalanceBehaviorPython from './_read-balance-behavior-python.mdx';
+import ReadBalanceBehaviorPhp from './_read-balance-behavior-php.mdx';
+import ReadBalanceBehaviorNodejs from './_read-balance-behavior-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/configuration/serialization.mdx b/versioned_docs/version-7.1/client-api/configuration/serialization.mdx
new file mode 100644
index 0000000000..5fb21b3cf8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/configuration/serialization.mdx
@@ -0,0 +1,44 @@
+---
+title: "Conventions: Serialization"
+hide_table_of_contents: true
+sidebar_label: Serialization
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SerializationCsharp from './_serialization-csharp.mdx';
+import SerializationJava from './_serialization-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/creating-document-store.mdx b/versioned_docs/version-7.1/client-api/creating-document-store.mdx
new file mode 100644
index 0000000000..58a28cfa47
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/creating-document-store.mdx
@@ -0,0 +1,49 @@
+---
+title: "Client API: Creating a Document Store"
+hide_table_of_contents: true
+sidebar_label: Creating Document Store
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import CreatingDocumentStoreCsharp from './_creating-document-store-csharp.mdx';
+import CreatingDocumentStoreJava from './_creating-document-store-java.mdx';
+import CreatingDocumentStoreNodejs from './_creating-document-store-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/_category_.json b/versioned_docs/version-7.1/client-api/data-subscriptions/_category_.json
new file mode 100644
index 0000000000..129df78e9f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/_category_.json
@@ -0,0 +1,4 @@
+{
+    "position": 9,
+    "label": "Data Subscriptions"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/_concurrent-subscriptions-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/_concurrent-subscriptions-csharp.mdx
new file mode 100644
index 0000000000..d71a2b699c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/_concurrent-subscriptions-csharp.mdx
@@ -0,0 +1,119 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* With **Concurrent Subscriptions**, multiple data subscription workers can connect to the same subscription task simultaneously.
+
+* Each worker is assigned a different batch of documents to process.
+
+* By processing different batches in parallel, multiple workers can significantly accelerate the consumption of the subscription's contents.
+
+* Documents that were assigned to workers whose connection has ended unexpectedly
+  can be reassigned by the server to available workers.
+ See [connection failure](../../client-api/data-subscriptions/concurrent-subscriptions.mdx#connection-failure) below.
+
+* In this page:
+ * [Defining concurrent workers](../../client-api/data-subscriptions/concurrent-subscriptions.mdx#defining-concurrent-workers)
+ * [Dropping a connection](../../client-api/data-subscriptions/concurrent-subscriptions.mdx#dropping-a-connection)
+ * [Connection failure](../../client-api/data-subscriptions/concurrent-subscriptions.mdx#connection-failure)
+
+
+## Defining concurrent workers
+
+Concurrent workers are defined similarly to other workers, except their
+[strategy](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)
+is set to [SubscriptionOpeningStrategy.Concurrent](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#multiple-workers-per-subscription-strategy).
+
+* To define a concurrent worker:
+ * Create the worker using [GetSubscriptionWorker](../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker).
+ * Pass it a [SubscriptionWorkerOptions](../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions) instance.
+ * Set the strategy to `SubscriptionOpeningStrategy.Concurrent`
+
+* Usage:
+ * Define two concurrent workers
+
+
+{`// Define concurrent subscription workers
+var subscriptionWorker1 = store.Subscriptions.GetSubscriptionWorker(
+ // Set the worker to connect to the "All Orders" subscription task
+ new SubscriptionWorkerOptions("All Orders")
+ \{
+ // Set Concurrent strategy
+ Strategy = SubscriptionOpeningStrategy.Concurrent,
+ MaxDocsPerBatch = 20
+ \});
+
+var subscriptionWorker2 = store.Subscriptions.GetSubscriptionWorker(
+ new SubscriptionWorkerOptions("All Orders")
+ \{
+ Strategy = SubscriptionOpeningStrategy.Concurrent,
+ MaxDocsPerBatch = 20
+ \});
+`}
+
+
+ * Run both workers
+
+
+{`// Start the concurrent worker. Workers will connect concurrently to the "All Orders" subscription task.
+var subscriptionRuntimeTask1 = subscriptionWorker1.Run(batch =>
+\{
+ // process batch
+ foreach (var item in batch.Items)
+ \{
+ // process item
+ \}
+\});
+
+var subscriptionRuntimeTask2 = subscriptionWorker2.Run(batch =>
+\{
+ // process batch
+ foreach (var item in batch.Items)
+ \{
+ // process item
+ \}
+\});
+`}
+
+
+
+
+
+## Dropping a connection
+
+* Use `Subscriptions.DropSubscriptionWorker` to **forcefully disconnect**
+ the specified worker from the subscription it is connected to.
+
+
+{`public void DropSubscriptionWorker<T>(SubscriptionWorker<T> worker, string database = null)
+`}
+
+
+
+* Usage:
+
+
+{`// Drop a concurrent subscription worker
+store.Subscriptions.DropSubscriptionWorker(subscriptionWorker2);
+`}
+
+
+
+
+
+## Connection failure
+
+* When a concurrent worker's connection ends unexpectedly,
+ the server may reassign the documents this worker has been processing to any other concurrent worker that is available.
+* A worker that reconnects after a connection failure will be assigned a **new** batch of documents.
+ It is **not** guaranteed that the new batch will contain the same documents this worker was processing before the disconnection.
+* As a result, documents may be processed more than once:
+ - first by a worker that disconnected unexpectedly without acknowledging the completion of its assigned documents,
+ - and later by other workers the documents are reassigned to.
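+
+Because delivery is therefore at-least-once, worker code should tolerate duplicates.
+Below is a minimal idempotency sketch; the in-memory `processedIds` set is an illustrative
+assumption (a real worker would typically persist processed IDs or make its processing idempotent):
+
+
+
+{`// Requires: using System.Collections.Concurrent;
+var processedIds = new ConcurrentDictionary<string, bool>();
+
+var workerTask = subscriptionWorker1.Run(batch =>
+\{
+    foreach (var item in batch.Items)
+    \{
+        // Skip documents this process has already handled
+        if (!processedIds.TryAdd(item.Id, true))
+            continue;
+
+        // process item
+    \}
+\});
+`}
+
+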
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/_concurrent-subscriptions-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/_concurrent-subscriptions-nodejs.mdx
new file mode 100644
index 0000000000..4db64cbcf2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/_concurrent-subscriptions-nodejs.mdx
@@ -0,0 +1,126 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* With **Concurrent Subscriptions**, multiple data subscription workers can connect to the same subscription task simultaneously.
+
+* Each worker is assigned a different batch of documents to process.
+
+* By processing different batches in parallel, multiple workers can significantly accelerate the consumption of the subscription's contents.
+
+* Documents that were assigned to a worker whose connection ended unexpectedly
+  can be reassigned by the server to available workers.
+  See [connection failure](../../client-api/data-subscriptions/concurrent-subscriptions.mdx#connection-failure) below.
+
+* In this page:
+ * [Defining concurrent workers](../../client-api/data-subscriptions/concurrent-subscriptions.mdx#defining-concurrent-workers)
+ * [Dropping a connection](../../client-api/data-subscriptions/concurrent-subscriptions.mdx#dropping-a-connection)
+ * [Connection failure](../../client-api/data-subscriptions/concurrent-subscriptions.mdx#connection-failure)
+
+
+## Defining concurrent workers
+
+Concurrent workers are defined similarly to other workers, except their
+[strategy](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)
+is set to [Concurrent](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#multiple-workers-per-subscription-strategy).
+
+* To define a concurrent worker:
+ * Create the worker using [getSubscriptionWorker](../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker).
+ * Pass it a [subscription worker options](../../client-api/data-subscriptions/consumption/api-overview.mdx#subscription-worker-options) object.
+ * Set the strategy to `Concurrent`.
+
+* Usage:
+ * Define two concurrent workers
+
+
+{`// Define 2 concurrent subscription workers
+// ========================================
+
+const options = \{
+ // Set concurrent strategy
+ strategy: "Concurrent",
+ subscriptionName: "Get all orders",
+ maxDocsPerBatch: 20
+\};
+
+const worker1 = documentStore.subscriptions.getSubscriptionWorker(options);
+const worker2 = documentStore.subscriptions.getSubscriptionWorker(options);
+`}
+
+
+ * Run both workers
+
+
+{`worker1.on("batch", (batch, callback) => \{
+ try \{
+ for (const item of batch.items) \{
+ // Process item
+ \}
+ callback();
+
+ \} catch(err) \{
+ callback(err);
+ \}
+\});
+
+worker2.on("batch", (batch, callback) => \{
+ try \{
+ for (const item of batch.items) \{
+ // Process item
+ \}
+ callback();
+
+ \} catch(err) \{
+ callback(err);
+ \}
+\});
+`}
+
+
+
+
+
+## Dropping a connection
+
+* Use `dropSubscriptionWorker` to **forcefully disconnect**
+ the specified worker from the subscription it is connected to.
+
+* Use `dropConnection` to disconnect ALL workers connected to the specified subscription.
+
+
+
+{`// Drop connection for worker2
+await documentStore.subscriptions.dropSubscriptionWorker(worker2);
+`}
+
+
+
+
+
+{`// Available overloads:
+dropConnection(options);
+dropConnection(options, database);
+dropSubscriptionWorker(worker);
+dropSubscriptionWorker(worker, database);
+`}
+
+
+
+
+
+## Connection failure
+
+* When a concurrent worker's connection ends unexpectedly,
+ the server may reassign the documents this worker has been processing to any other concurrent worker that is available.
+* A worker that reconnects after a connection failure will be assigned a **new** batch of documents.
+ It is **not** guaranteed that the new batch will contain the same documents this worker was processing before the disconnection.
+* As a result, documents may be processed more than once:
+ - first by a worker that disconnected unexpectedly without acknowledging the completion of its assigned documents,
+ - and later by other workers the documents are reassigned to.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-csharp.mdx
new file mode 100644
index 0000000000..cd09e4587d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-csharp.mdx
@@ -0,0 +1,160 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Data subscriptions provide a reliable and handy way to perform document processing on the client side.
+* The server sends batches of documents to the client.
+ The client then processes the batch and will receive the next one only after it acknowledges the batch was processed.
+ The server persists the processing progress, allowing you to pause and continue the processing.
+
+* In this page:
+ * [Data subscription consumption](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#data-subscription-consumption)
+ * [What defines a data subscription](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#what-defines-a-data-subscription)
+ * [Documents processing](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#documents-processing)
+ * [Progress Persistence](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#progress-persistence)
+ * [How the worker communicates with the server](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#how-the-worker-communicates-with-the-server)
+ * [Working with multiple clients](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#working-with-multiple-clients)
+ * [Data subscriptions usage example](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#data-subscriptions-usage-example)
+
+
+## Data subscription consumption
+
+* Data subscriptions are consumed by clients, called **Subscription Workers**.
+* You can determine whether workers can connect to a subscription
+ [concurrently, or only one at a time](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-interplay).
+* A worker that connects to a data subscription receives a batch of documents and processes it.
+ Depending on the code that the client provided the worker with, processing can take from seconds to hours.
+ When all documents are processed, the worker informs the server of its progress and the server can send it the next batch.
+
+
+
+## What defines a data subscription
+
+A data subscription is defined by its server-side definition and by the worker that connects to it:
+
+1. [Subscription Creation Options](../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptioncreationoptions): The documents that will be sent to the worker, including their filtering and projection.
+
+2. [Subscription Worker Options](../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions): Worker batch processing logic, batch size, interaction with other connections.
+
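+
+For illustration, here is a minimal sketch that combines both parts; the filter
+value and batch size below are arbitrary assumptions:
+
+
+
+{`// 1. Server-side definition: which documents are sent (filtering/projection)
+string subscriptionName = await store.Subscriptions.CreateAsync(
+    new SubscriptionCreationOptions<Order>
+    \{
+        Filter = order => order.Freight > 20
+    \});
+
+// 2. Worker-side definition: batch size and interplay with other connections
+var worker = store.Subscriptions.GetSubscriptionWorker<Order>(
+    new SubscriptionWorkerOptions(subscriptionName)
+    \{
+        MaxDocsPerBatch = 50
+    \});
+`}
+
+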
+
+
+## Documents processing
+
+Documents are sent in batches, and progress is registered only after the whole batch is processed and acknowledged.
+Documents are always sent in Etag order, which means that data that has already been processed and acknowledged won't be sent twice, except in the following scenarios:
+
+1. If the document was changed after it was already sent.
+
+2. If data was received but not acknowledged.
+
+3. In case of subscription failover (`Enterprise feature`), when there is a chance that documents will be processed again, because it's not always possible to find the same starting point on a different machine.
+
+
+If the database has Revisions defined, the subscription can be configured to process pairs
+of subsequent document revisions.
+Read more here: [revisions support](../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx)
+
+
+
+
+## Progress Persistence
+
+* The processing progress is persisted on the server, and the subscription
+ task can therefore be paused and resumed from the point at which it stopped.
+* The persistence mechanism also ensures that no documents are missed even in the
+ presence of failure, whether client-side, communication-related, or any other disaster.
+* In the `Enterprise edition`, subscription progress is stored at the cluster level.
+ In the case of a node failure, processing can be automatically failed over to another node.
+* The usage of **Change Vectors** allows us to continue from a point that is close to
+ the last point reached before failure rather than starting the process from scratch.
+
+
+## How the worker communicates with the server
+
+A worker communicates with the data subscription using a custom protocol on top of a long-lived TCP connection. Each successful batch processing consists of these stages:
+
+1. The server sends documents in a batch.
+
+2. Worker sends acknowledgment message after it finishes processing the batch.
+
+3. The server sends the client a notification that the acknowledgment has been persisted and that it is ready to send the next batch.
+
+
+When the responsible node handling the subscription is down, the subscription task can be manually reassigned to another node in the cluster.
+With the Enterprise license, the cluster will automatically reassign the work to another node.
+
+
+* The status of the TCP connection is also used to determine the "state" of the worker process.
+ If the subscription and its workers implement a
+ [One Worker Per Subscription](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-interplay)
+ strategy, as long as the connection is alive the server will not allow
+ other clients to consume the subscription.
+* The TCP connection is kept alive and monitored using "heartbeat" messages.
+ If the connection is found nonfunctional, the current batch progress will be restarted.
+
+See the sequence diagram below that summarizes the lifespan of a subscription connection.
+
+
+
+
+
+## Working with multiple clients
+
+You can use a **Subscription Worker Strategy** to determine whether multiple
+workers of the same subscription can connect to it one by one, or **concurrently**.
+
+* **One Worker Per Subscription Strategies**
+ The one-worker-per-subscription strategies allow workers of the same subscription
+ to connect to it **one worker at a time**, with different strategies to support various
+ inter-worker scenarios.
+ * One worker is allowed to take the place of another in the processing of a subscription.
+ Thanks to subscription persistence, the worker can continue the work
+ from the point its predecessor reached.
+ * You can also configure a worker to wait for an existing connection to fail and take
+ its place, or to force an existing connection to close.
+ * Read more about these strategies [here](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#one-worker-per-subscription-strategies).
+
+* **Concurrent Subscription Strategy**
+ Using the concurrent subscription strategy, multiple workers of the same subscription can
+ connect to it simultaneously and divide the documents processing load between them to speed it up.
+ * Batch processing is divided between the multiple workers.
+ * Connection failure is handled by assigning batches of failing workers to
+ active available workers.
+ * Read more about this strategy [here](../../client-api/data-subscriptions/concurrent-subscriptions.mdx).
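+
+A strategy is selected through the worker options. The following sketch lists the
+`SubscriptionOpeningStrategy` values described above (the chosen strategy and subscription
+name are arbitrary examples):
+
+
+
+{`var worker = store.Subscriptions.GetSubscriptionWorker<Order>(
+    new SubscriptionWorkerOptions("All Orders")
+    \{
+        // One worker at a time:
+        //   OpenIfFree  - connect only if no other worker is connected (default)
+        //   WaitForFree - wait for the current connection to close, then take its place
+        //   TakeOver    - force the currently connected worker to disconnect
+        // Multiple workers at once:
+        //   Concurrent  - share the subscription with other concurrent workers
+        Strategy = SubscriptionOpeningStrategy.WaitForFree
+    \});
+`}
+
+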
+
+
+
+## Data subscriptions usage example
+
+Data subscriptions are accessible through a document store.
+Here's an example of creating and using a data subscription:
+
+
+
+{`public async Task Worker(IDocumentStore store, CancellationToken cancellationToken)
+\{
+ // Create the ongoing subscription task on the server
+ string subscriptionName = await store.Subscriptions
+ .CreateAsync<Order>(x => x.Company == "companies/11");
+
+ // Create a worker on the client that will consume the subscription
+ SubscriptionWorker<Order> worker = store.Subscriptions
+ .GetSubscriptionWorker<Order>(subscriptionName);
+
+ // Run the worker task and process data received from the subscription
+ Task workerTask = worker.Run(x => x.Items.ForEach(item =>
+ Console.WriteLine($"Order #\{item.Result.Id\} will be shipped via: \{item.Result.ShipVia\}")),
+ cancellationToken);
+
+ await workerTask;
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-java.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-java.mdx
new file mode 100644
index 0000000000..c2d1857b87
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-java.mdx
@@ -0,0 +1,134 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Data subscriptions provide a reliable and handy way to perform document processing on the client side.
+* The server sends batches of documents to the client.
+ The client then processes the batch and will receive the next one only after it acknowledges the batch was processed.
+ The server persists the processing progress, allowing you to pause and continue the processing.
+
+* In this page:
+ * [Data subscription consumption](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#data-subscription-consumption)
+ * [What defines a data subscription](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#what-defines-a-data-subscription)
+ * [Documents processing](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#documents-processing)
+ * [Progress Persistence](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#progress-persistence)
+ * [How the worker communicates with the server](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#how-the-worker-communicates-with-the-server)
+ * [Working with multiple clients](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#working-with-multiple-clients)
+ * [Data subscriptions usage example](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#data-subscriptions-usage-example)
+
+
+
+## Data subscription consumption
+
+Data subscriptions are consumed by clients called subscription workers. At any given moment, only one worker can be connected to a data subscription.
+A worker connected to a data subscription receives a batch of documents and processes it.
+Depending on the code the client provided, processing can take from seconds to hours. When it finishes, the worker informs the server of its progress, and the server is then ready to send the next batch.
+
+
+
+## What defines a data subscription
+
+A data subscription is defined by its server-side definition and by the worker that connects to it:
+
+1. [Subscription Creation Options](../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptioncreationoptions): The documents that will be received, including their filtering and projection.
+
+2. [Subscription Worker Options](../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions): Worker batch processing logic, batch size, interaction with other connections.
+
+
+
+## Documents processing
+
+Documents are sent in batches and progress will be registered only after the whole batch is processed and acknowledged.
+Documents are always sent in Etag order, which means that data that has already been processed and acknowledged won't be sent twice, except in the following scenarios:
+
+1. If the document was changed after it was already sent.
+
+2. If data was received but not acknowledged.
+
+3. In case of subscription failover (`Enterprise feature`), when there is a chance that documents will be processed again, because it's not always possible to find the same starting point on a different machine.
+
+
+If the database has Revisions defined, the subscription can be configured to process pairs
+of subsequent document revisions.
+Read more here: [revisions support](../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx)
+
+
+
+
+## Progress Persistence
+
+Processing progress is persisted, and the subscription can therefore be paused and resumed from the point at which it stopped.
+The persistence mechanism also ensures that no documents are missed even in the presence of failure, whether client-side, communication-related, or any other disaster.
+In the `Enterprise edition`, subscription progress is stored at the cluster level. In the case of node failure, processing can be automatically failed over to another node.
+The usage of Change Vectors allows us to continue from a point that is close to the last point reached before failure rather than starting the process from scratch.
+
+
+## How the worker communicates with the server
+
+A worker communicates with the data subscription using a custom protocol on top of a long-lived TCP connection. Each successful batch processing consists of these stages:
+
+1. The server sends a batch of documents.
+
+2. The worker sends an acknowledgment message after it finishes processing the batch.
+
+3. The server sends the client a notification that the acknowledgment has been persisted and that it is ready to send the next batch.
+
+
+When the responsible node handling the subscription is down, the subscription task can be manually reassigned to another node in the cluster.
+With the Enterprise license, the cluster will automatically reassign the work to another node.
+
+
+The status of the TCP connection also serves as the "state" of the worker process; as long as the connection is alive, the server will not allow other clients to consume the subscription.
+The TCP connection is kept alive and monitored using "heartbeat" messages. If it is found nonfunctional, the current batch progress will be restarted.
+
+See the sequence diagram below that summarizes the lifetime of a subscription connection.
+
+
+
+
+
+## Working with multiple clients
+
+In order to support various inter-worker scenarios, one worker is allowed to take the place of another in the processing of a subscription.
+Thanks to subscription persistence, the worker will be able to continue the work from the point its predecessor stopped.
+
+A worker can be configured to wait for an existing connection to fail and take its place, or to force an existing connection to close. See more in [Workers interplay](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#workers-interplay).
+
+
+
+## Data subscriptions usage example
+
+Data subscriptions are accessible through a document store. Here's an example of ad-hoc creation and usage of a data subscription:
+
+
+
+{`public void worker(IDocumentStore store) \{
+
+ // Create the ongoing subscription task on the server
+ SubscriptionCreationOptions options = new SubscriptionCreationOptions();
+ options.setQuery("from Orders where Company = 'companies/11'");
+ String subscriptionName = store.subscriptions().create(Order.class, options);
+
+ // Create a worker on the client that will consume the subscription
+ SubscriptionWorker<Order> worker = store
+ .subscriptions().getSubscriptionWorker(Order.class, subscriptionName);
+
+ // Run the worker task and process data received from the subscription
+ worker.run(x -> \{
+ for (SubscriptionBatch<Order>.Item item : x.getItems()) \{
+ System.out.println("Order #"
+ + item.getResult().getId()
+ + " will be shipped via: " + item.getResult().getShipVia());
+ \}
+ \});
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-nodejs.mdx
new file mode 100644
index 0000000000..12e6eedbd2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/_what-are-data-subscriptions-nodejs.mdx
@@ -0,0 +1,133 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Data subscriptions provide a reliable and handy way to perform document processing on the client side.
+* The server sends batches of documents to the client.
+ The client then processes the batch and will receive the next one only after it acknowledges the batch was processed.
+ The server persists the processing progress, allowing you to pause and continue the processing.
+
+* In this page:
+ * [Data subscription consumption](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#data-subscription-consumption)
+ * [What defines a data subscription](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#what-defines-a-data-subscription)
+ * [Documents processing](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#documents-processing)
+ * [Progress Persistence](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#progress-persistence)
+ * [How the worker communicates with the server](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#how-the-worker-communicates-with-the-server)
+ * [Working with multiple clients](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#working-with-multiple-clients)
+ * [Data subscriptions usage example](../../client-api/data-subscriptions/what-are-data-subscriptions.mdx#data-subscriptions-usage-example)
+
+
+
+## Data subscription consumption
+
+Data subscriptions are consumed by clients called subscription workers. At any given moment, only one worker can be connected to a data subscription.
+A worker connected to a data subscription receives a batch of documents and processes it.
+Depending on the code the client provided, processing can take from seconds to hours. When it finishes, the worker informs the server of its progress, and the server is then ready to send the next batch.
+
+
+
+## What defines a data subscription
+
+A data subscription is defined by its server-side definition and by the worker that connects to it:
+
+1. [Subscription Creation Options](../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptioncreationoptions): The documents that will be received, including their filtering and projection.
+
+2. [Subscription Worker Options](../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions): Worker batch processing logic, batch size, interaction with other connections.
+
+
+
+## Documents processing
+
+Documents are sent in batches and progress will be registered only after the whole batch is processed and acknowledged.
+Documents are always sent in Etag order, which means that data that has already been processed and acknowledged won't be sent twice, except in the following scenarios:
+
+1. If the document was changed after it was already sent.
+
+2. If data was received but not acknowledged.
+
+3. In case of subscription failover (`Enterprise feature`), when there is a chance that documents will be processed again, because it's not always possible to find the same starting point on a different machine.
+
+
+If the database has Revisions defined, the subscription can be configured to process pairs
+of subsequent document revisions.
+Read more here: [revisions support](../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx)
+
+
+
+
+## Progress Persistence
+
+Processing progress is persisted, and the subscription can therefore be paused and resumed from the point at which it stopped.
+The persistence mechanism also ensures that no documents are missed even in the presence of failure, whether client-side, communication-related, or any other disaster.
+In the `Enterprise edition`, subscription progress is stored at the cluster level. In the case of node failure, processing can be automatically failed over to another node.
+The usage of Change Vectors allows us to continue from a point that is close to the last point reached before failure rather than starting the process from scratch.
+
+
+## How the worker communicates with the server
+
+A worker communicates with the data subscription using a custom protocol on top of a long-lived TCP connection. Each successful batch processing consists of these stages:
+
+1. The server sends a batch of documents.
+
+2. The worker sends an acknowledgment message after it finishes processing the batch.
+
+3. The server sends the client a notification that the acknowledgment has been persisted and that it is ready to send the next batch.
+
+
+When the responsible node handling the subscription is down, the subscription task can be manually reassigned to another node in the cluster.
+With the Enterprise license, the cluster will automatically reassign the work to another node.
+
+
+The status of the TCP connection also serves as the "state" of the worker process; as long as the connection is alive, the server will not allow other clients to consume the subscription.
+The TCP connection is kept alive and monitored using "heartbeat" messages. If it is found nonfunctional, the current batch progress will be restarted.
+
+See the sequence diagram below that summarizes the lifetime of a subscription connection.
+
+
+
+
+
+## Working with multiple clients
+
+In order to support various inter-worker scenarios, one worker is allowed to take the place of another in the processing of a subscription.
+Thanks to subscription persistence, the worker will be able to continue the work from the point its predecessor stopped.
+
+A worker can be configured to wait for an existing connection to fail and take its place, or to force an existing connection to close. See more in [Workers interplay](../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#workers-interplay).
+
+
+
+## Data subscriptions usage example
+
+Data subscriptions are accessible through a document store. Here's an example of ad-hoc creation and usage of a data subscription:
+
+
+
+{`async function worker() \{
+
+ // Create the ongoing subscription task on the server
+ const subscriptionName = await store.subscriptions.create(\{
+ query: "from Orders where Company = 'companies/11'"
+ \});
+
+ // Create a worker on the client that will consume the subscription
+ const worker = store.subscriptions.getSubscriptionWorker(subscriptionName);
+
+ // Listen for and process data received in batches from the subscription
+ worker.on("batch", (batch, callback) => \{
+ for (const item of batch.items) \{
+ console.log(\`Order #$\{item.result.Id\} will be shipped via: $\{item.result.ShipVia\}\`);
+ \}
+
+ callback();
+ \});
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_category_.json b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_category_.json
new file mode 100644
index 0000000000..c206697952
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 3,
+ "label": Advanced topics,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-csharp.mdx
new file mode 100644
index 0000000000..fcc71d4ce2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-csharp.mdx
@@ -0,0 +1,206 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article covers data subscriptions maintenance operations.
+
+* In this page:
+ * [DocumentSubscriptions class](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#documentsubscriptions-class)
+ * [Delete subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#delete-subscription)
+ * [Disable subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#disable-subscription)
+ * [Enable subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#enable-subscription)
+ * [Update subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#update-subscription)
+ * [Drop connection](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#drop-connection)
+ * [Get subscription state](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#get-subscription-state)
+
+
+## DocumentSubscriptions class
+
+The `DocumentSubscriptions` class manages all interaction with data subscriptions.
+It is available through the `DocumentStore`'s `Subscriptions` property.
+
+| Method Signature | Return type | Description |
+|------------------|-------------|-------------|
+| **Create<T>(SubscriptionCreationOptions<T> options, string database)** | `string` | Create a new data subscription. |
+| **Create(SubscriptionCreationOptions criteria, string database)** | `string` | Create a new data subscription. |
+| **CreateAsync<T>(SubscriptionCreationOptions<T> options, string database)** | `Task<string>` | Create a new data subscription. |
+| **CreateAsync<T>(Expression<Func<T, bool>> predicate, SubscriptionCreationOptions options, string database)** | `Task<string>` | Create a new data subscription. |
+| **Delete(string name, string database)** | `void` | Delete subscription. |
+| **DeleteAsync(string name, string database)** | `Task` | Delete subscription. |
+| **DropConnection(string name, string database)** | `void` | Drop all existing subscription connections with workers. |
+| **DropConnectionAsync(string name, string database)** | `Task` | Drop all existing subscription connections with workers. |
+| **DropSubscriptionWorker<T>(SubscriptionWorker<T> worker, string database = null)** | `void` | Drop an existing subscription connection with a worker. |
+| **Enable(string name, string database)** | `void` | Enable existing subscription. |
+| **EnableAsync(string name, string database)** | `Task` | Enable existing subscription. |
+| **Disable(string name, string database)** | `void` | Disable existing subscription. |
+| **DisableAsync(string name, string database)** | `Task` | Disable existing subscription. |
+| **GetSubscriptions(int start, int take, string database)** | `List<SubscriptionState>` | Return the subscriptions list. |
+| **GetSubscriptionsAsync(int start, int take, string database)** | `Task<List<SubscriptionState>>` | Return the subscriptions list. |
+| **GetSubscriptionState(string subscriptionName, string database)** | `SubscriptionState` | Get a specific subscription's state. |
+| **GetSubscriptionStateAsync(string subscriptionName, string database)** | `Task<SubscriptionState>` | Get a specific subscription's state. |
+| **GetSubscriptionWorker<T>(string subscriptionName, string database)** | `SubscriptionWorker<T>` | Generate a subscription worker, using default configuration, that processes documents deserialized to type `T`. |
+| **GetSubscriptionWorker(string subscriptionName, string database)** | `SubscriptionWorker<dynamic>` | Generate a subscription worker, using default configuration, that processes documents in their raw `BlittableJsonReader` form, wrapped in a dynamic object. |
+| **GetSubscriptionWorker<T>(SubscriptionWorkerOptions options, string database)** | `SubscriptionWorker<T>` | Generate a subscription worker, using the provided configuration, that processes documents deserialized to type `T`. |
+| **GetSubscriptionWorker(SubscriptionWorkerOptions options, string database)** | `SubscriptionWorker<dynamic>` | Generate a subscription worker, using the provided configuration, that processes documents in their raw `BlittableJsonReader` form, wrapped in a dynamic object. |
+| **Update(SubscriptionUpdateOptions options, string database = null)** | `string` | Update an existing data subscription. |
+| **UpdateAsync(SubscriptionUpdateOptions options, string database = null, CancellationToken token = default)** | `Task<string>` | Update an existing data subscription. |
+
+
+
+## Delete subscription
+
+Subscriptions can be entirely deleted from the system.
+
+This operation can be very useful in ad-hoc subscription scenarios, where information about many subscription tasks may accumulate and make task management very hard.
+
+
+
+{`void Delete(string name, string database = null);
+Task DeleteAsync(string name, string database = null, CancellationToken token = default);
+`}
+
+
+
+usage:
+
+
+
+{`store.Subscriptions.Delete(subscriptionName);
+`}
+
+
+
+
+
+## Disable subscription
+
+Existing subscription tasks can be disabled from the client.
+
+
+
+{`void Disable(string name, string database = null);
+Task DisableAsync(string name, string database = null, CancellationToken token = default);
+`}
+
+
+
+usage:
+
+
+
+{`store.Subscriptions.Disable(subscriptionName);
+`}
+
+
+
+
+
+## Enable subscription
+
+Existing subscription tasks can be enabled from the client.
+This operation can be used to resume subscriptions that were previously disabled.
+
+
+
+{`void Enable(string name, string database = null);
+Task EnableAsync(string name, string database = null, CancellationToken token = default);
+`}
+
+
+
+usage:
+
+
+
+{`store.Subscriptions.Enable(subscriptionName);
+`}
+
+
+
+
+
+## Update subscription
+
+See [examples](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription)
+and [API description](../../../client-api/data-subscriptions/creation/api-overview.mdx#update-subscription).
+
+
+
+{`string Update(SubscriptionUpdateOptions options, string database = null);
+
+Task<string> UpdateAsync(SubscriptionUpdateOptions options, string database = null,
+ CancellationToken token = default);
+`}
+
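+
+usage (a minimal sketch; the query value below is an illustrative assumption):
+
+
+
+{`store.Subscriptions.Update(new SubscriptionUpdateOptions
+\{
+    // Identify the subscription to update by its name
+    Name = subscriptionName,
+    // The updated subscription query
+    Query = "from Orders where Freight > 20"
+\});
+`}
+
+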
+
+
+
+
+## Drop connection
+
+Active subscription connections established by workers can be dropped remotely from the client.
+Once dropped, the worker will not attempt to reconnect to the server.
+
+
+
+{`void DropConnection(string name, string database = null);
+Task DropConnectionAsync(string name, string database = null, CancellationToken token = default);
+`}
+
+
+
+usage:
+
+
+
+{`store.Subscriptions.DropConnection(subscriptionName);
+`}
+
+
+
+
+
+## Get subscription state
+
+
+
+{`SubscriptionState GetSubscriptionState(string subscriptionName, string database = null);
+Task<SubscriptionState> GetSubscriptionStateAsync(string subscriptionName, string database = null, CancellationToken token = default);
+`}
+
+
+
+usage:
+
+
+
+{`var subscriptionState = store.Subscriptions.GetSubscriptionState(subscriptionName);
+`}
+
+
+
+
+
+##### SubscriptionState
+
+| Member | Type | Description |
+|-------------------------------------------|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Query** | `string` | The subscription's RQL-like query. |
+| **LastBatchAckTime** | `DateTime?` | Last time a batch processing progress was acknowledged. |
+| **NodeTag** | `string` | Processing server's node tag. |
+| **MentorNode** | `string` | The mentor node that was manually set. |
+| **SubscriptionName** | `string` | The subscription's name, which is also its unique identifier. |
+| **SubscriptionId** | `long` | Subscription's internal identifier (cluster's operation etag during subscription creation). |
+| **ChangeVectorForNextBatchStartingPoint** | `string` | The Change Vector from which the subscription will begin sending documents.<br/>This value is updated on batch acknowledgment and can also be set manually. |
+| **Disabled** | `bool` | If `true`, subscription will not allow workers to connect. |
+| **LastClientConnectionTime** | `DateTime?` | Time when last client was connected (value sustained after disconnection). |
+
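+
+For example, the returned state can be inspected to monitor the subscription's progress
+(a minimal sketch; the printed fields are taken from the table above):
+
+
+
+{`var state = store.Subscriptions.GetSubscriptionState(subscriptionName);
+
+// The change vector the subscription will resume from
+Console.WriteLine(state.ChangeVectorForNextBatchStartingPoint);
+
+// Whether workers are currently allowed to connect
+Console.WriteLine($"Workers may connect: \{!state.Disabled\}");
+`}
+
+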
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-java.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-java.mdx
new file mode 100644
index 0000000000..16aecf7986
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-java.mdx
@@ -0,0 +1,160 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This page covers data subscriptions maintenance operations:
+ * [Deleting subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#deleting-subscription)
+ * [Dropping connection](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#dropping-connection)
+ * [Disabling subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#disabling-subscription)
+ * [Updating subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#updating-subscription)
+ * [Getting subscription status](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#getting-subscription-status)
+
+
+## Deleting subscription
+
+Subscriptions can be entirely deleted from the system.
+
+This operation can be very useful in ad-hoc subscription scenarios, where information about many subscription tasks may accumulate and make task management very hard.
+
+
+
+{`void delete(String name);
+
+void delete(String name, String database);
+`}
+
+
+
+usage:
+
+
+
+{`store.subscriptions().delete(subscriptionName);
+`}
+
+
+
+
+
+## Dropping connection
+
+Subscription connections with workers can be dropped remotely.
+A dropped worker will not try to reconnect to the server.
+
+
+
+{`void dropConnection(String name);
+
+void dropConnection(String name, String database);
+`}
+
+
+
+usage:
+
+
+
+{`store.subscriptions().dropConnection(subscriptionName);
+`}
+
+
+
+
+
+## Disabling subscription
+
+
+This operation can only be performed through the management studio
+
+
+
+
+## Updating subscription
+
+
+This operation can only be performed through the management studio
+
+
+
+
+## Getting subscription status
+
+
+
+{`SubscriptionState getSubscriptionState(String subscriptionName);
+
+SubscriptionState getSubscriptionState(String subscriptionName, String database);
+`}
+
+
+
+usage:
+
+
+
+{`SubscriptionState subscriptionState = store.subscriptions().getSubscriptionState(subscriptionName);
+`}
+
+
+
+
+
+| Member | Type | Description |
+|--------|:-----|-------------|
+| **query** | `String` | The subscription's RQL-like query. |
+| **lastBatchAckTime** | `Date` | Last time a batch processing progress was acknowledged. |
+| **nodeTag** | `String` | Processing server's node tag. |
+| **mentorNode** | `String` | The mentor node that was manually set. |
+| **subscriptionName** | `String` | The subscription's name, which is also its unique identifier. |
+| **subscriptionId** | `long` | Subscription's internal identifier (cluster's operation etag during subscription creation). |
+| **changeVectorForNextBatchStartingPoint** | `String` | The change vector from which the subscription will begin sending documents. This value is updated on batch acknowledgment and can also be set manually. |
+| **disabled** | `boolean` | If `true`, the subscription will not allow workers to connect. |
+| **lastClientConnectionTime** | `Date` | Time when the last client was connected (value sustained after disconnection). |
+
+
+
+
+
+## DocumentSubscriptions class
+
+The `DocumentSubscriptions` class manages all interaction with data subscriptions.
+It is available through the `DocumentStore`'s `subscriptions()` method.
+
+| Method Signature | Return type | Description |
+|------------------|-------------|-------------|
+| **create(SubscriptionCreationOptions options)** | `String` | Creates a new data subscription. |
+| **create(SubscriptionCreationOptions options, String database)** | `String` | Creates a new data subscription. |
+| **create(Class<T> clazz)** | `String` | Creates a new data subscription. |
+| **create(Class<T> clazz, SubscriptionCreationOptions options)** | `String` | Creates a new data subscription. |
+| **create(Class<T> clazz, SubscriptionCreationOptions options, String database)** | `String` | Creates a new data subscription. |
+| **createForRevisions(Class<T> clazz)** | `String` | Creates a new revisions data subscription. |
+| **createForRevisions(Class<T> clazz, SubscriptionCreationOptions options)** | `String` | Creates a new revisions data subscription. |
+| **createForRevisions(Class<T> clazz, SubscriptionCreationOptions options, String database)** | `String` | Creates a new revisions data subscription. |
+| **delete(String name)** | `void` | Deletes subscription. |
+| **delete(String name, String database)** | `void` | Deletes subscription. |
+| **dropConnection(String name)** | `void` | Drops existing subscription connection with worker. |
+| **dropConnection(String name, String database)** | `void` | Drops existing subscription connection with worker. |
+| **getSubscriptions(int start, int take)** | `List<SubscriptionState>` | Returns the subscriptions list. |
+| **getSubscriptions(int start, int take, String database)** | `List<SubscriptionState>` | Returns the subscriptions list. |
+| **getSubscriptionState(String subscriptionName)** | `SubscriptionState` | Gets a specific subscription's state. |
+| **getSubscriptionState(String subscriptionName, String database)** | `SubscriptionState` | Gets a specific subscription's state. |
+| **getSubscriptionWorker(String subscriptionName)** | `SubscriptionWorker<ObjectNode>` | Generates a subscription worker, using default configuration, that processes documents in their raw `ObjectNode` form. |
+| **getSubscriptionWorker(String subscriptionName, String database)** | `SubscriptionWorker<ObjectNode>` | Generates a subscription worker, using default configuration, that processes documents in their raw `ObjectNode` form. |
+| **getSubscriptionWorker(SubscriptionWorkerOptions options)** | `SubscriptionWorker<ObjectNode>` | Generates a subscription worker, using the provided configuration, that processes documents in their raw `ObjectNode` form. |
+| **getSubscriptionWorker(SubscriptionWorkerOptions options, String database)** | `SubscriptionWorker<ObjectNode>` | Generates a subscription worker, using the provided configuration, that processes documents in their raw `ObjectNode` form. |
+| **getSubscriptionWorker<T>(Class<T> clazz, String subscriptionName)** | `SubscriptionWorker<T>` | Generates a subscription worker, using default configuration, that processes documents deserialized to type `T`. |
+| **getSubscriptionWorker<T>(Class<T> clazz, String subscriptionName, String database)** | `SubscriptionWorker<T>` | Generates a subscription worker, using default configuration, that processes documents deserialized to type `T`. |
+| **getSubscriptionWorker<T>(Class<T> clazz, SubscriptionWorkerOptions options)** | `SubscriptionWorker<T>` | Generates a subscription worker, using the provided configuration, that processes documents deserialized to type `T`. |
+| **getSubscriptionWorker<T>(Class<T> clazz, SubscriptionWorkerOptions options, String database)** | `SubscriptionWorker<T>` | Generates a subscription worker, using the provided configuration, that processes documents deserialized to type `T`. |
+| **getSubscriptionWorkerForRevisions<T>(Class<T> clazz, String subscriptionName)** | `SubscriptionWorker<Revision<T>>` | Generates a revisions subscription worker, using default configuration, that processes pairs of document revisions deserialized to type `T`. |
+| **getSubscriptionWorkerForRevisions<T>(Class<T> clazz, String subscriptionName, String database)** | `SubscriptionWorker<Revision<T>>` | Generates a revisions subscription worker, using default configuration, that processes pairs of document revisions deserialized to type `T`. |
+| **getSubscriptionWorkerForRevisions<T>(Class<T> clazz, SubscriptionWorkerOptions options)** | `SubscriptionWorker<Revision<T>>` | Generates a revisions subscription worker, using the provided configuration, that processes pairs of document revisions deserialized to type `T`. |
+| **getSubscriptionWorkerForRevisions<T>(Class<T> clazz, SubscriptionWorkerOptions options, String database)** | `SubscriptionWorker<Revision<T>>` | Generates a revisions subscription worker, using the provided configuration, that processes pairs of document revisions deserialized to type `T`. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-nodejs.mdx
new file mode 100644
index 0000000000..a62165b301
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_maintenance-operations-nodejs.mdx
@@ -0,0 +1,252 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This article covers data subscriptions maintenance operations.
+
+* In this page:
+ * [Delete subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#delete-subscription)
+ * [Disable subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#disable-subscription)
+ * [Enable subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#enable-subscription)
+ * [Update subscription](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#update-subscription)
+ * [Drop connection](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#drop-connection)
+ * [Get subscriptions](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#get-subscriptions)
+ * [Get subscription state](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#get-subscription-state)
+ * [DocumentSubscriptions class](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#documentsubscriptions-class)
+
+
+## Delete subscription
+
+Subscription tasks can be entirely deleted from the system.
+
+
+
+{`await documentStore.subscriptions.delete("subscriptionNameToDelete");
+`}
+
+
+
+
+{`// Available overloads:
+delete(name);
+delete(name, database);
+`}
+
+
+
+
+
+## Disable subscription
+
+Existing subscription tasks can be disabled from the client.
+
+
+
+{`await documentStore.subscriptions.disable("subscriptionNameToDisable");
+`}
+
+
+
+
+{`// Available overloads:
+disable(name);
+disable(name, database);
+`}
+
+
+
+
+
+## Enable subscription
+
+Existing subscription tasks can be enabled from the client.
+This operation can be used to resume subscriptions that were previously disabled.
+
+
+
+{`await documentStore.subscriptions.enable("subscriptionNameToEnable");
+`}
+
+
+
+
+{`// Available overloads:
+enable(name);
+enable(name, database);
+`}
+
+
+
+
+
+## Update subscription
+
+See [examples](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription)
+and [API description](../../../client-api/data-subscriptions/creation/api-overview.mdx#update-subscription).
+
+
+
+{`const updateOptions = \{
+ id: "",
+ query: ""
+ // ...
+\}
+await documentStore.subscriptions.update(updateOptions);
+`}
+
+
+
+
+{`// Available overloads:
+update(options);
+update(options, database);
+`}
+
+
+
+
+
+## Drop connection
+
+Active subscription connections established by workers can be dropped remotely from the client.
+Once dropped, the worker will not attempt to reconnect to the server.
+
+
+
+{`// Drop all connections to the subscription:
+// =========================================
+
+await documentStore.subscriptions.dropConnection("subscriptionName");
+
+// Drop specific worker connection:
+// ===============================
+
+const workerOptions = \{
+ subscriptionName: "subscriptionName",
+ // ...
+\};
+
+const worker = documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+
+
+worker.on("batch", (batch, callback) => \{
+ // worker processing logic
+\});
+
+await documentStore.subscriptions.dropSubscriptionWorker(worker);
+`}
+
+
+
+
+{`// Available overloads:
+dropConnection(options);
+dropConnection(options, database);
+dropSubscriptionWorker(worker);
+dropSubscriptionWorker(worker, database);
+`}
+
+
+
+
+
+## Get subscriptions
+
+Get a list of all existing subscription tasks in the database.
+
+
+
+{`const subscriptions = await documentStore.subscriptions.getSubscriptions(0, 10);
+`}
+
+
+
+
+{`// Available overloads:
+getSubscriptions(start, take);
+getSubscriptions(start, take, database);
+`}
+
+
+
+
+
+## Get subscription state
+
+
+
+{`const subscriptionState =
+ await documentStore.subscriptions.getSubscriptionState("subscriptionName");
+`}
+
+
+
+
+{`// Available overloads:
+getSubscriptionState(subscriptionName);
+getSubscriptionState(subscriptionName, database);
+`}
+
+
+
+
+
+##### SubscriptionState
+
+| Member | Type | Description |
+|-------------------------------------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **query** | `string` | The subscription's RQL-like query. |
+| **lastBatchAckTime** | `string` | Last time a batch processing progress was acknowledged. |
+| **nodeTag** | `string` | Processing server's node tag. |
+| **mentorNode** | `string` | The mentor node that was manually set. |
+| **subscriptionName** | `string` | The subscription's name, which is also its unique identifier. |
+| **subscriptionId** | `number` | Subscription's internal identifier (cluster's operation etag during subscription creation). |
+| **changeVectorForNextBatchStartingPoint** | `string` | The Change Vector from which the subscription will begin sending documents. This value is updated on batch acknowledgement and can also be set manually. |
+| **disabled** | `boolean` | If `true`, subscription will not allow workers to connect. |
+| **lastClientConnectionTime** | `string` | Time when last client was connected (value sustained after disconnection). |
+
+
+
+
+
+## DocumentSubscriptions class
+
+The `DocumentSubscriptions` class manages all interaction with the data subscriptions.
+The class is available through the `subscriptions` property in the `documentStore`.
+
+| Method Signature | Return type | Description |
+|------------------|-------------|-------------|
+| **create(options)** | `Promise<string>` | Create a new data subscription. |
+| **create(options, database)** | `Promise<string>` | Create a new data subscription. |
+| **create(documentType)** | `Promise<string>` | Create a new data subscription. |
+| **create(optionsOrDocumentType, database)** | `Promise<string>` | Create a new data subscription. |
+| **createForRevisions(options)** | `Promise<string>` | Create a new data subscription for revisions. |
+| **createForRevisions(options, database)** | `Promise<string>` | Create a new data subscription for revisions. |
+| **delete(name)** | `Promise<void>` | Delete subscription. |
+| **delete(name, database)** | `Promise<void>` | Delete subscription. |
+| **dropConnection(name)** | `Promise<void>` | Drop all existing subscription connections with workers. |
+| **dropConnection(name, database)** | `Promise<void>` | Drop all existing subscription connections with workers. |
+| **dropSubscriptionWorker(worker, database)** | `Promise<void>` | Drop an existing subscription connection with a worker. |
+| **enable(name)** | `Promise<void>` | Enable existing subscription. |
+| **enable(name, database)** | `Promise<void>` | Enable existing subscription. |
+| **disable(name)** | `Promise<void>` | Disable existing subscription. |
+| **disable(name, database)** | `Promise<void>` | Disable existing subscription. |
+| **update(updateOptions)** | `Promise<string>` | Update an existing data subscription. |
+| **update(updateOptions, database)** | `Promise<string>` | Update an existing data subscription. |
+| **getSubscriptions(start, take)** | `Promise<SubscriptionState[]>` | Return the subscriptions list. |
+| **getSubscriptions(start, take, database)** | `Promise<SubscriptionState[]>` | Return the subscriptions list. |
+| **getSubscriptionState(subscriptionName)** | `Promise<SubscriptionState>` | Get the state of a specific subscription. |
+| **getSubscriptionState(subscriptionName, database)** | `Promise<SubscriptionState>` | Get the state of a specific subscription. |
+| **getSubscriptionWorker(options)** | `SubscriptionWorker` | Generate a subscription worker. |
+| **getSubscriptionWorker(options, database)** | `SubscriptionWorker` | Generate a subscription worker. |
+| **getSubscriptionWorker(subscriptionName)** | `SubscriptionWorker` | Generate a subscription worker. |
+| **getSubscriptionWorker(subscriptionName, database)** | `SubscriptionWorker` | Generate a subscription worker. |
+| **getSubscriptionWorkerForRevisions(options)** | `SubscriptionWorker` | Generate a subscription worker for a revisions subscription. |
+| **getSubscriptionWorkerForRevisions(options, database)** | `SubscriptionWorker` | Generate a subscription worker for a revisions subscription. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-csharp.mdx
new file mode 100644
index 0000000000..cbb54c9e40
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-csharp.mdx
@@ -0,0 +1,312 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When the [Revisions feature](../../../document-extensions/revisions/overview.mdx) is enabled, a document revision is created with each change made to the document.
+ Each revision contains a snapshot of the document at the time of modification, forming a complete audit trail.
+
+* The **Data Subscription** feature supports subscribing not only to documents but also to their **revisions**.
+ This functionality allows the subscribed client to track changes made to documents over time.
+
+* Revisions support is specified within the subscription definition.
+ See how to create and consume such a subscription in the examples below.
+
+* In this page:
+ * [Regular subscription vs Revisions subscription](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#regular-subscription-vs-revisions-subscription)
+ * [Revisions processing order](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#revisions-processing-order)
+ * [Simple creation and consumption](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#simple-creation-and-consumption)
+ * [Filtering revisions](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#filtering-revisions)
+ * [Projecting fields from revisions](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#projecting-fields-from-revisions)
+
+
+## Regular subscription vs Revisions subscription
+
+
+
+##### Regular subscription
+* **Processed items**:
+ The subscription processes **documents** from the defined collection.
+ Only the latest version of the document is processed, even if the document has revisions.
+* **Query access scope**:
+ The subscription query running on the server has access only to the latest/current version of the documents.
+* **Data sent to client**:
+ Each item in the batch sent to the client contains a single document (or a projection of it),
+ as defined in the subscription.
+
+
+
+
+##### Revisions subscription
+* **Processed items**:
+ The subscription processes all **revisions** of documents from the defined collection,
+ including revisions of deleted documents from the revision bin if they have not been purged.
+* **Query access scope**:
+ For each revision, the subscription query running on the server has access to both the currently processed revision and its previous revision.
+* **Data sent to client**:
+ By default, unless the subscription query is [projecting specific fields](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#projecting-fields-from-revisions),
+ each item in the batch sent to the client contains both the processed revision (`Result.Current`) and its preceding revision (`Result.Previous`).
+ If the document has just been created, the previous revision will be `null`.
+
+
+* In order for the revisions subscription to work,
+ [Revisions must be configured](../../../document-extensions/revisions/overview.mdx#defining-a-revisions-configuration) and enabled for the collection the subscription manages.
+
+* A document that has no revisions will not be processed,
+  so make sure that your revisions configuration does not purge revisions before the subscription has a chance to process them.
+
+
+
+
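+
+For context, a minimal sketch of enabling revisions on the Orders collection through the maintenance API; the configuration values here are illustrative, not recommendations:
+
+
+
+{`store.Maintenance.Send(new ConfigureRevisionsOperation(new RevisionsConfiguration
+\{
+    Collections = new Dictionary<string, RevisionsCollectionConfiguration>
+    \{
+        // Enable revisions for the Orders collection
+        ["Orders"] = new RevisionsCollectionConfiguration
+        \{
+            Disabled = false,
+            // Keep enough revisions for the subscription to process them
+            // before any are purged (the value here is illustrative)
+            MinimumRevisionsToKeep = 100
+        \}
+    \}
+\}));
+`}
+
+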
+
+
+## Revisions processing order
+
+In the revisions subscription, revisions are processed in pairs of subsequent entries.
+For example, consider the following User document:
+
+
+
+{`\{
+ Name: "James",
+ Age: "21"
+\}
+`}
+
+
+
+We update this User document in two consecutive operations:
+
+* Update the 'Age' field to the value of 22
+* Update the 'Age' field to the value of 23
+
+The subscription worker in the client will receive pairs of revisions ( _Previous_ & _Current_ )
+within each item in the batch in the following order:
+
+| Batch item | Previous | Current |
+|------------|--------------------------------|--------------------------------|
+| item #1 | `null` | `{ Name: "James", Age: "21" }` |
+| item #2 | `{ Name: "James", Age: "21" }` | `{ Name: "James", Age: "22" }` |
+| item #3 | `{ Name: "James", Age: "22" }` | `{ Name: "James", Age: "23" }` |
+
+
+
+## Simple creation and consumption
+
+Here we set up a basic revisions subscription that will deliver pairs of consecutive _Order_ document revisions to the client:
+
+**Create subscription**:
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(
+    // Use Revision<T> as the type for the processed items
+    // e.g. Revision<Order>
+    new SubscriptionCreationOptions<Revision<Order>>());
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ // Add (Revisions = true) to your subscription RQL
+ Query = @"From Orders (Revisions = true)"
+});
+`}
+
+
+
+
+**Consume subscription**:
+
+
+
+{`SubscriptionWorker<Revision<Order>> revisionsWorker =
+    // Specify Revision<Order> as the type of the processed items
+    store.Subscriptions.GetSubscriptionWorker<Revision<Order>>(subscriptionName);
+
+await revisionsWorker.Run((SubscriptionBatch<Revision<Order>> batch) =>
+\{
+ foreach (var item in batch.Items)
+ \{
+ // Access the previous revision via 'Result.Previous'
+ var previousRevision = item.Result.Previous;
+
+ // Access the current revision via 'Result.Current'
+ var currentRevision = item.Result.Current;
+
+ // Provide your own processing logic:
+ ProcessOrderRevisions(previousRevision, currentRevision);
+ \}
+\});
+`}
+
+
+
+
+
+## Filtering revisions
+
+Here we set up a revisions subscription that will send the client only document revisions in which the order was shipped to Mexico.
+
+**Create subscription**:
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(
+    // Specify Revision<Order> as the type of the processed items
+    new SubscriptionCreationOptions<Revision<Order>>()
+    {
+        // Provide filtering logic
+        // Only revisions in which the order was shipped to Mexico will be sent to subscribed clients
+ Filter = revision => revision.Current.ShipTo.Country == "Mexico",
+ });
+`}
+
+
+
+
+{`subscriptionName = await store.Subscriptions.CreateAsync(new SubscriptionCreationOptions()
+{
+ Query = @"declare function isSentToMexico(doc) {
+ return doc.Current.ShipTo.Country == 'Mexico'
+ }
+
+ from 'Orders' (Revisions = true) as doc
+ where isSentToMexico(doc) == true"
+});
+`}
+
+
+
+
+**Consume subscription**:
+
+
+
+{`SubscriptionWorker<Revision<Order>> worker =
+    store.Subscriptions.GetSubscriptionWorker<Revision<Order>>(subscriptionName);
+
+await worker.Run(batch =>
+\{
+ foreach (var item in batch.Items)
+ \{
+ Console.WriteLine($@"
+ This is a revision of document \{item.Id\}.
+ The order in this revision was shipped at \{item.Result.Current.ShippedAt\}.");
+ \}
+\});
+`}
+
+
+
+
+
+## Projecting fields from revisions
+
+Here we define a revisions subscription that will filter the revisions and send projected data to the client.
+
+**Create subscription**:
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(
+    // Specify Revision<Order> as the type of the processed items within the query
+    new SubscriptionCreationOptions<Revision<Order>>()
+ {
+ // Filter revisions by the revenue delta.
+ // The subscription will only process revisions where the revenue
+ // is higher than in the preceding revision by 2500.
+ Filter = revision =>
+ revision.Previous != null &&
+ revision.Current.Lines.Sum(x => x.PricePerUnit * x.Quantity) >
+ revision.Previous.Lines.Sum(x => x.PricePerUnit * x.Quantity) + 2500,
+
+ // Define the projected fields that will be sent to the client
+ Projection = revision => new OrderRevenues()
+ {
+ PreviousRevenue =
+ revision.Previous.Lines.Sum(x => x.PricePerUnit * x.Quantity),
+
+ CurrentRevenue =
+ revision.Current.Lines.Sum(x => x.PricePerUnit * x.Quantity)
+ }
+ });
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Query = @"declare function isRevenueDeltaAboveThreshold(doc, threshold) {
+ return doc.Previous !== null && doc.Current.Lines.map(function(x) {
+ return x.PricePerUnit * x.Quantity;
+ }).reduce((a, b) => a + b, 0) > doc.Previous.Lines.map(function(x) {
+ return x.PricePerUnit * x.Quantity;
+ }).reduce((a, b) => a + b, 0) + threshold
+ }
+
+ from 'Orders' (Revisions = true) as doc
+ where isRevenueDeltaAboveThreshold(doc, 2500)
+
+ select {
+ PreviousRevenue: doc.Previous.Lines.map(function(x) {
+ return x.PricePerUnit * x.Quantity;
+ }).reduce((a, b) => a + b, 0),
+
+ CurrentRevenue: doc.Current.Lines.map(function(x) {
+ return x.PricePerUnit * x.Quantity;
+ }).reduce((a, b) => a + b, 0)
+ }"
+});
+`}
+
+
+
+
+{`public class OrderRevenues
+{
+ public decimal PreviousRevenue { get; set; }
+ public decimal CurrentRevenue { get; set; }
+}
+`}
+
+
+
+
+**Consume subscription**:
+
+Since the revision fields are projected into the `OrderRevenues` class in the subscription definition,
+each item received in the batch has the format of this projected class instead of the default `Result.Previous` and `Result.Current` fields,
+as was demonstrated in the [simple example](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#simple-creation-and-consumption).
+
+
+
+{`SubscriptionWorker<OrderRevenues> revenuesComparisonWorker =
+    // Use the projected class type 'OrderRevenues' for the items the worker will process
+    store.Subscriptions.GetSubscriptionWorker<OrderRevenues>(subscriptionName);
+
+await revenuesComparisonWorker.Run(batch =>
+\{
+ foreach (var item in batch.Items)
+ \{
+ // Access the projected content:
+ Console.WriteLine($@"Revenue for order with ID: \{item.Id\}
+ has grown from \{item.Result.PreviousRevenue\}
+ to \{item.Result.CurrentRevenue\}");
+ \}
+\});
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-java.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-java.mdx
new file mode 100644
index 0000000000..473554f719
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-java.mdx
@@ -0,0 +1,147 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The **Data Subscription** feature supports subscribing not only to documents, but also to [document revisions](../../../document-extensions/revisions/overview.mdx).
+
+* Revisions support is defined within the subscription.
+  A [Revisions Configuration](../../../document-extensions/revisions/client-api/operations/configure-revisions.mdx) must be defined for the subscribed collection.
+
+* While a regular subscription processes a single document, a Revisions subscription processes **pairs of subsequent document revisions**.
+
+ Using this functionality allows you to keep track of each change made in a document, as well as compare pairs of subsequent versions of the document.
+
+ Both revisions are accessible for filtering and projection.
+
+* In this page:
+ * [Revisions processing order](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#revisions-processing-order)
+ * [Simple declaration and usage](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#simple-declaration-and-usage)
+ * [Revisions processing and projection](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#revisions-processing-and-projection)
+
+
+## Revisions processing order
+
+The Revisions feature tracks the changes made to a document by storing an audit trail of its versions over time.
+An audit trail entry is called a **Document Revision** and consists of a snapshot of the document.
+Read more about revisions [here](../../../document-extensions/revisions/overview.mdx).
+
+In a data subscription, revisions will be processed in pairs of subsequent entries.
+For example, consider the following User document:
+
+`{ Name:'James', Age:'21' }`
+
+We update the User document twice, in separate operations:
+
+* We update the 'Age' field to the value of 22
+* We update the 'Age' field to the value of 23
+
+The data subscriptions revisions processing mechanism will receive pairs of revisions in the following order:
+
+| # | Previous | Current |
+|---|------------------------------|------------------------------|
+| 1 | `null` | `{ Name:'James', Age:'21' }` |
+| 2 | `{ Name:'James', Age:'21' }` | `{ Name:'James', Age:'22' }` |
+| 3 | `{ Name:'James', Age:'22' }` | `{ Name:'James', Age:'23' }` |
+
+
+The revisions subscription will be able to function properly only if the revisions it needs to process are available.
+Please make sure that your revisions configuration doesn't purge revisions before the subscription has had a chance to process them.
+
+
+
+
+## Simple declaration and usage
+
+Here we declare a simple revisions subscription that will send pairs of subsequent document revisions to the client:
+
+Creation:
+
+
+
+{`name = store.subscriptions().createForRevisions(Order.class);
+`}
+
+
+
+
+{`SubscriptionCreationOptions options = new SubscriptionCreationOptions();
+options.setQuery("from orders (Revisions = true)");
+name = store.subscriptions().createForRevisions(Order.class, options);
+`}
+
+
+
+
+Consumption:
+
+
+{`SubscriptionWorker<Revision<Order>> revisionWorker = store
+    .subscriptions().getSubscriptionWorkerForRevisions(Order.class, name);
+revisionWorker.run(x -> \{
+    for (SubscriptionBatch.Item<Revision<Order>> documentsPair : x.getItems()) \{
+
+ Order prev = documentsPair.getResult().getPrevious();
+ Order current = documentsPair.getResult().getCurrent();
+
+ processOrderChanges(prev, current);
+ \}
+\});
+`}
+
+
+
+
+
+## Revisions processing and projection
+
+Here we declare a revisions subscription that will filter and project data from revision pairs:
+
+Creation:
+
+
+{`SubscriptionCreationOptions options = new SubscriptionCreationOptions();
+options.setQuery("declare function getOrderLinesSum(doc) \{" +
+ " var sum = 0;" +
+ " for (var i in doc.Lines) \{ sum += doc.Lines[i]; \} " +
+ " return sum;" +
+ "\}" +
+ "" +
+ " from orders (Revisions = true) " +
+ " where getOrderLinesSum(this.Current) > getOrderLinesSum(this.Previous) " +
+ " select \{" +
+ " previousRevenue: getOrderLinesSum(this.Previous)," +
+ " currentRevenue: getOrderLinesSum(this.Current)" +
+ "\}");
+
+name = store.subscriptions().create(options);
+`}
+
+
+
+Consumption:
+
+
+{`SubscriptionWorker<Revision<Order>> revisionWorker = store
+    .subscriptions().getSubscriptionWorkerForRevisions(Order.class, name);
+revisionWorker.run(x -> \{
+    for (SubscriptionBatch.Item<Revision<Order>> documentsPair : x.getItems()) \{
+
+ Order prev = documentsPair.getResult().getPrevious();
+ Order current = documentsPair.getResult().getCurrent();
+
+ processOrderChanges(prev, current);
+ \}
+\});
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-nodejs.mdx
new file mode 100644
index 0000000000..4da675704b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/_subscription-with-revisioning-nodejs.mdx
@@ -0,0 +1,287 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When the [Revisions feature](../../../document-extensions/revisions/overview.mdx) is enabled, a document revision is created with each change made to the document.
+ Each revision contains a snapshot of the document at the time of modification, forming a complete audit trail.
+
+* The **Data Subscription** feature supports subscribing not only to documents but also to their **revisions**.
+ This functionality allows the subscribed client to track changes made to documents over time.
+
+* Revisions support is specified within the subscription definition.
+  See how to create and consume such a subscription in the examples below.
+
+* In this page:
+ * [Regular subscription vs Revisions subscription](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#regular-subscription-vs-revisions-subscription)
+ * [Revisions processing order](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#revisions-processing-order)
+ * [Simple creation and consumption](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#simple-creation-and-consumption)
+ * [Filtering revisions](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#filtering-revisions)
+ * [Projecting fields from revisions](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#projecting-fields-from-revisions)
+
+
+## Regular subscription vs Revisions subscription
+
+
+
+##### Regular subscription
+* **Processed items**:
+ The subscription processes **documents** from the defined collection.
+ Only the latest version of the document is processed, even if the document has revisions.
+* **Query access scope**:
+ The subscription query running on the server has access only to the latest/current version of the documents.
+* **Data sent to client**:
+ Each item in the batch sent to the client contains a single document (or a projection of it),
+ as defined in the subscription.
+
+
+
+
+##### Revisions subscription
+* **Processed items**:
+ The subscription processes all **revisions** of documents from the defined collection,
+ including revisions of deleted documents from the revision bin if they have not been purged.
+* **Query access scope**:
+ For each revision, the subscription query running on the server has access to both the currently processed revision and its previous revision.
+* **Data sent to client**:
+ By default, unless the subscription query is projecting specific fields,
+ each item in the batch sent to the client contains both the processed revision (`result.current`) and its preceding revision (`result.previous`).
+ If the document has just been created, the previous revision will be `null`.
+
+
+* In order for the revisions subscription to work,
+ [Revisions must be configured](../../../document-extensions/revisions/overview.mdx#defining-a-revisions-configuration) and enabled for the collection the subscription manages.
+
+* A document that has no revisions will not be processed,
+  so make sure that your revisions configuration does not purge revisions before the subscription has a chance to process them.
+
+
+
+
+
+
+## Revisions processing order
+
+In the revisions subscription, revisions are processed in pairs of subsequent entries.
+For example, consider the following User document:
+
+
+
+{`\{
+ Name: "James",
+ Age: "21"
+\}
+`}
+
+
+
+We update this User document in two consecutive operations:
+
+* Update the 'Age' field to the value of 22
+* Update the 'Age' field to the value of 23
+
+The subscription worker in the client will receive pairs of revisions ( _previous_ & _current_ )
+within each item in the batch in the following order:
+
+| Batch item | Previous | Current |
+|------------|--------------------------------|--------------------------------|
+| item #1 | `null` | `{ Name: "James", Age: "21" }` |
+| item #2 | `{ Name: "James", Age: "21" }` | `{ Name: "James", Age: "22" }` |
+| item #3 | `{ Name: "James", Age: "22" }` | `{ Name: "James", Age: "23" }` |
+
+
+
+## Simple creation and consumption
+
+Here we set up a basic revisions subscription that will deliver pairs of consecutive _Order_ document revisions to the client:
+
+**Create subscription**:
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.create(\{
+ // Add (Revisions = true) to your subscription RQL
+ query: "From Orders (Revisions = true)"
+\});
+`}
+
+
+
+**Consume subscription**:
+
+
+
+{`const workerOptions = \{ subscriptionName \};
+
+const worker =
+ // Use method \`getSubscriptionWorkerForRevisions\`
+ documentStore.subscriptions.getSubscriptionWorkerForRevisions(workerOptions);
+
+worker.on("batch", (batch, callback) => \{
+ try \{
+ for (const item of batch.items) \{
+
+ // Access the previous revision via 'result.previous'
+ const previousRevision = item.result.previous;
+
+ // Access the current revision via 'result.current'
+ const currentRevision = item.result.current;
+ \}
+ callback();
+
+ \} catch (err) \{
+ callback(err);
+ \}
+\});
+`}
+
+
+
+
+
+## Filtering revisions
+
+Here we set up a revisions subscription that will send the client only document revisions in which the order was shipped to Mexico.
+
+**Create subscription**:
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.create(\{
+ // Provide filtering logic
+    // Only revisions in which the order was shipped to Mexico will be sent to subscribed clients
+ query: \`declare function isSentToMexico(doc) \{
+ return doc.Current.ShipTo.Country == 'Mexico'
+ \}
+
+ from 'Orders' (Revisions = true) as doc
+ where isSentToMexico(doc) == true\`
+\});
+`}
+
+
+
+**Consume subscription**:
+
+
+
+{`const workerOptions = \{ subscriptionName \};
+
+const worker =
+ documentStore.subscriptions.getSubscriptionWorkerForRevisions(workerOptions);
+
+worker.on("batch", (batch, callback) => \{
+ try \{
+ for (const item of batch.items) \{
+ console.log(\`
+ This is a revision of document $\{item.id\}.
+ The order in this revision was shipped at $\{item.result.current.ShippedAt\}.
+ \`);
+ \}
+ callback();
+
+ \} catch (err) \{
+ callback(err);
+ \}
+\});
+`}
+
+
+
+
+
+## Projecting fields from revisions
+
+Here we define a revisions subscription that will filter the revisions and send projected data to the client.
+
+**Create subscription**:
+
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.create({
+ // Filter revisions by the revenue delta.
+ // The subscription will only process revisions where the revenue
+ // is higher than in the preceding revision by 2500.
+
+ query: \`declare function isRevenueDeltaAboveThreshold(doc, threshold) {
+ return doc.Previous !== null && doc.Current.Lines.map(function(x) {
+ return x.PricePerUnit * x.Quantity;
+ }).reduce((a, b) => a + b, 0) > doc.Previous.Lines.map(function(x) {
+ return x.PricePerUnit * x.Quantity;
+ }).reduce((a, b) => a + b, 0) + threshold
+ }
+
+ from 'Orders' (Revisions = true) as doc
+ where isRevenueDeltaAboveThreshold(doc, 2500)
+
+ // Define the projected fields that will be sent to the client:
+ select {
+ previousRevenue: doc.Previous.Lines.map(function(x) {
+ return x.PricePerUnit * x.Quantity;
+ }).reduce((a, b) => a + b, 0),
+
+ currentRevenue: doc.Current.Lines.map(function(x) {
+ return x.PricePerUnit * x.Quantity;
+ }).reduce((a, b) => a + b, 0)
+ }\`
+});
+`}
+
+
+
+
+{`class OrderRevenues {
+ constructor() {
+        this.previousRevenue = null;
+        this.currentRevenue = null;
+ }
+}
+`}
+
+
+
+
+**Consume subscription**:
+
+Since the revision fields are projected into the `OrderRevenues` class in the subscription definition,
+each item received in the batch has the format of this projected class instead of the default `result.previous` and `result.current` fields,
+as was demonstrated in the [simple example](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx#simple-creation-and-consumption).
+
+
+
+{`const workerOptions = \{
+ subscriptionName: subscriptionName,
+ documentType: OrderRevenues
+\};
+
+const worker =
+ // Note: in this case, where each resulting item in the batch is a projected object
+ // and not the revision itself, we use method \`getSubscriptionWorker\`
+ documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+
+worker.on("batch", (batch, callback) => \{
+ try \{
+ for (const item of batch.items) \{
+ // Access the projected content:
+ console.log(\`
+ Revenue for order with ID: $\{item.id\}
+ has grown from $\{item.result.previousRevenue\}
+ to $\{item.result.currentRevenue\}
+ \`);
+ \}
+ callback();
+
+ \} catch (err) \{
+ callback(err);
+ \}
+\});
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx
new file mode 100644
index 0000000000..02933882f5
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx
@@ -0,0 +1,44 @@
+---
+title: "Data Subscriptions: Maintenance Operations"
+hide_table_of_contents: true
+sidebar_label: Maintenance Operations
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import MaintenanceOperationsCsharp from './_maintenance-operations-csharp.mdx';
+import MaintenanceOperationsJava from './_maintenance-operations-java.mdx';
+import MaintenanceOperationsNodejs from './_maintenance-operations-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx
new file mode 100644
index 0000000000..146e1cbf51
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx
@@ -0,0 +1,48 @@
+---
+title: "Data Subscriptions: Revisions Support"
+hide_table_of_contents: true
+sidebar_label: Revisions Support
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SubscriptionWithRevisioningCsharp from './_subscription-with-revisioning-csharp.mdx';
+import SubscriptionWithRevisioningJava from './_subscription-with-revisioning-java.mdx';
+import SubscriptionWithRevisioningNodejs from './_subscription-with-revisioning-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/assets/SubscriptionsDocumentProcessing.png b/versioned_docs/version-7.1/client-api/data-subscriptions/assets/SubscriptionsDocumentProcessing.png
new file mode 100644
index 0000000000..f1fc7883d2
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/data-subscriptions/assets/SubscriptionsDocumentProcessing.png differ
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/concurrent-subscriptions.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/concurrent-subscriptions.mdx
new file mode 100644
index 0000000000..3ea98edbd7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/concurrent-subscriptions.mdx
@@ -0,0 +1,34 @@
+---
+title: "Concurrent Subscriptions"
+hide_table_of_contents: true
+sidebar_label: Concurrent Subscriptions
+sidebar_position: 4
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ConcurrentSubscriptionsCsharp from './_concurrent-subscriptions-csharp.mdx';
+import ConcurrentSubscriptionsNodejs from './_concurrent-subscriptions-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-csharp.mdx
new file mode 100644
index 0000000000..9644ddb560
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-csharp.mdx
@@ -0,0 +1,262 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Create the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker)
+ * [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions)
+ * [Run the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker)
+ * [SubscriptionBatch<T>](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionbatcht)
+ * [SubscriptionBatch<T>.Item](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionbatchtitem)
+ * [SubscriptionWorker<T>](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkert)
+
+
+## Create the subscription worker
+
+A subscription worker can be created using the following `GetSubscriptionWorker` methods available through the `Subscriptions` property of the `DocumentStore`.
+
+Note: simply creating the worker is not enough;
+you must then [run the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker) to initiate document processing.
+
+
+
+{`SubscriptionWorker<dynamic> GetSubscriptionWorker(
+    string subscriptionName, string database = null);
+
+SubscriptionWorker<dynamic> GetSubscriptionWorker(
+    SubscriptionWorkerOptions options, string database = null);
+
+SubscriptionWorker<T> GetSubscriptionWorker<T>(
+    string subscriptionName, string database = null) where T : class;
+
+SubscriptionWorker<T> GetSubscriptionWorker<T>(
+    SubscriptionWorkerOptions options, string database = null) where T : class;
+`}
+
+
+
+| Parameter | Type | Description |
+|----------------------|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
+| **subscriptionName** | `string` | The name of the subscription to which the worker will connect. |
+| **options** | `SubscriptionWorkerOptions` | Options that affect how the worker interacts with the subscription. These options do not alter the definition of the subscription itself. |
+| **database** | `string` | The name of the database where the subscription task resides. If `null`, the default database configured in DocumentStore will be used. |
+
+| Return value | |
+|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
+| `SubscriptionWorker` | The subscription worker that has been created. Initially, it is idle and will only start processing documents when the `Run` function is called. |
+
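+
+For example, a minimal sketch of creating a typed worker; the subscription name used here is a hypothetical placeholder:
+
+
+
+{`// Assumes a subscription named "ordersSubscription" was already created
+SubscriptionWorker<Order> worker =
+    store.Subscriptions.GetSubscriptionWorker<Order>("ordersSubscription");
+`}
+
+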
+
+
+## SubscriptionWorkerOptions
+
+
+
+{`public class SubscriptionWorkerOptions
+\{
+ public string SubscriptionName \{ get; \}
+ public int MaxDocsPerBatch \{ get; set; \}
+ public int SendBufferSizeInBytes \{ get; set; \}
+ public int ReceiveBufferSizeInBytes \{ get; set; \}
+ public bool IgnoreSubscriberErrors \{ get; set; \}
+ public bool CloseWhenNoDocsLeft \{ get; set; \}
+ public TimeSpan TimeToWaitBeforeConnectionRetry \{ get; set; \}
+ public TimeSpan ConnectionStreamTimeout \{ get; set; \}
+ public TimeSpan MaxErroneousPeriod \{ get; set; \}
+ public SubscriptionOpeningStrategy Strategy \{ get; set; \}
+\}
+`}
+
+
+
+When creating a worker with `SubscriptionWorkerOptions`, the only mandatory property is `SubscriptionName`.
+All other parameters are optional and will default to their respective default values if not specified.
+
+| Member | Type | Description |
+|-------------------------------------|-------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **SubscriptionName** | `string` | The name of the subscription to which the worker will connect. |
+| **MaxDocsPerBatch** | `int` | The maximum number of documents that the server will try to retrieve and send to the client in a batch. If the server doesn't find as many documents as specified, it will send the documents it has found without waiting. Default: 4096. |
+| **SendBufferSizeInBytes** | `int` | The size in bytes of the TCP socket buffer used for _sending_ data. Default: 32,768 bytes (32 KiB). |
+| **ReceiveBufferSizeInBytes** | `int` | The size in bytes of the TCP socket buffer used for _receiving_ data. Default: 4096 (4 KiB). |
+| **IgnoreSubscriberErrors** | `bool` | Determines if subscription processing is aborted when the worker's batch-handling code throws an unhandled exception. <br/> `true` – subscription processing will continue. <br/> `false` (Default) – subscription processing will be aborted. |
+| **CloseWhenNoDocsLeft** | `bool` | Determines whether the subscription connection closes when no new documents are available. <br/> `true` – The subscription worker processes all available documents and stops when none remain, at which point the `Run` method throws a `SubscriptionClosedException`. Useful for ad-hoc, one-time processing. <br/> `false` (Default) – The subscription worker remains active, waiting for new documents. |
+| **TimeToWaitBeforeConnectionRetry** | `TimeSpan` | The time to wait before attempting to reconnect after a non-aborting failure during subscription processing. Default: 5 seconds. |
+| **MaxErroneousPeriod** | `TimeSpan` | The maximum amount of time a subscription connection can remain in an erroneous state before it is terminated. Default: 5 minutes. |
+| **Strategy** | `SubscriptionOpeningStrategy` | This enum configures how the server handles connection attempts from workers to a specific subscription task. Default: `OpenIfFree`. |
+
+Learn more about `SubscriptionOpeningStrategy` in [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies).
+
+
+
+{`public enum SubscriptionOpeningStrategy
+\{
+ // Connect if no other worker is connected
+ OpenIfFree,
+
+ // Take over the connection
+ TakeOver,
+
+ // Wait for currently connected worker to disconnect
+ WaitForFree,
+
+ // Connect concurrently
+ Concurrent
+\}
+`}
+
+
+
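+
+For example, a sketch of creating a worker with customized options; the subscription name and option values are illustrative:
+
+
+
+{`var options = new SubscriptionWorkerOptions("ordersSubscription")
+\{
+    // Retrieve smaller batches than the default 4096 documents
+    MaxDocsPerBatch = 512,
+
+    // Wait for any currently connected worker to disconnect before connecting
+    Strategy = SubscriptionOpeningStrategy.WaitForFree
+\};
+
+var worker = store.Subscriptions.GetSubscriptionWorker<Order>(options);
+`}
+
+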
+
+
+## Run the subscription worker
+
+After [creating](../../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker) a subscription worker, the worker is still not processing any documents.
+To start processing, you need to call the `Run` method of the [SubscriptionWorker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkert).
+
+The `Run` function takes a delegate, which is your client-side code responsible for processing the received document batches.
+
+
+
+{`Task Run(Action<SubscriptionBatch<T>> processDocuments,
+    CancellationToken ct = default(CancellationToken));
+
+Task Run(Func<SubscriptionBatch<T>, Task> processDocuments,
+ CancellationToken ct = default(CancellationToken));
+`}
+
+
+
+| Parameter | Type | Description |
+|----------------------|------------------------------------|----------------------------------------------------------------|
+| **processDocuments** | `Action<SubscriptionBatch<T>>` | Delegate for synchronous batch processing. |
+| **processDocuments** | `Func<SubscriptionBatch<T>, Task>` | Delegate for asynchronous batch processing. |
+| **ct** | `CancellationToken` | Cancellation token used to halt the worker operation. |
+
+| Return value | |
+|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `Task` | A task that remains alive as long as the subscription worker is processing or attempting to process batches. If processing is aborted, the task exits with an exception. |
+
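+
+For example, a minimal sketch of running the worker with an async delegate; the processing logic is illustrative:
+
+
+
+{`Task workerTask = worker.Run(async batch =>
+\{
+    foreach (var item in batch.Items)
+    \{
+        // Replace with your own batch-processing logic
+        await Console.Out.WriteLineAsync($"Processing document: \{item.Id\}");
+    \}
+\});
+
+// Await the task to observe processing failures
+await workerTask;
+`}
+
+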
+
+
+## SubscriptionBatch<T>
+
+| Member | Type | Description |
+|--------------------------|-----------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Items** | `List<SubscriptionBatch<T>.Item>` | List of items in the batch. See [SubscriptionBatch<T>.Item](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionbatchtitem) below. |
+| **NumberOfItemsInBatch** | `int` | Number of items in the batch. |
+
+| Method Signature | Return value | Description |
+|------------------------|-------------------------|-------------------------------------------------------------------------------------------------------------------|
+| **OpenSession()** | `IDocumentSession` | Open a new document session that tracks all items and their included items within the current batch. |
+| **OpenAsyncSession()** | `IAsyncDocumentSession` | Open a new asynchronous document session that tracks all items and their included items within the current batch. |
+
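+
+For illustration, a sketch of using the batch's session to update the processed documents; the `Processed` property is a hypothetical field on the entity:
+
+
+
+{`await worker.Run(async batch =>
+\{
+    // The session tracks all documents in the current batch
+    using (IAsyncDocumentSession session = batch.OpenAsyncSession())
+    \{
+        foreach (var item in batch.Items)
+        \{
+            item.Result.Processed = true; // hypothetical property
+        \}
+
+        // Persist the modifications made to the batch documents
+        await session.SaveChangesAsync();
+    \}
+\});
+`}
+
+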
+
+
+##### Subscription worker connectivity
+
+As long as there is no exception, the worker will continue addressing the same server that the first batch was received from.
+If the worker fails to reach that node, it will try to [failover](../../../client-api/configuration/load-balance/overview.mdx) to another node from the session's topology list.
+The node that the worker succeeds in connecting to will inform the worker which node is currently responsible for data subscriptions.
+
+
+
+
+
+## SubscriptionBatch<T>.Item
+
+This class represents a single item in the subscription batch results.
+
+
+
+{`public struct Item
+\{
+ public T Result \{ get; internal set; \}
+ public string ExceptionMessage \{ get; internal set; \}
+ public string Id \{ get; internal set; \}
+ public string ChangeVector \{ get; internal set; \}
+ public bool Projection \{ get; internal set; \}
+ public bool Revision \{ get; internal set; \}
+ public BlittableJsonReaderObject RawResult \{ get; internal set; \}
+ public BlittableJsonReaderObject RawMetadata \{ get; internal set; \}
+ public IMetadataDictionary Metadata \{ get; internal set; \}
+\}
+`}
+
+
+
+| Member | Type | Description |
+|----------------------|-----------------------------|-------------------------------------------------------------------------------------------------------|
+| **Result** | `T` | The current batch item. If `T` is `BlittableJsonReaderObject`, no deserialization will take place. |
+| **ExceptionMessage** | `string` | The message of the exception thrown while processing the current document on the server side. |
+| **Id** | `string` | The document ID of the underlying document for the current batch item. |
+| **ChangeVector** | `string` | The change vector of the underlying document for the current batch item. |
+| **RawResult** | `BlittableJsonReaderObject` | Current batch item before serialization to `T`. |
+| **RawMetadata** | `BlittableJsonReaderObject` | Current batch item's underlying document metadata. |
+| **Metadata** | `IMetadataDictionary` | Current batch item's underlying metadata values. |
+
+
+This class should only be used within the subscription's `Run` delegate.
+Using it outside this scope may cause unexpected behavior.
+
+
+
+
+## SubscriptionWorker<T>
+
+
+
+##### Methods
+
+| Method Signature | Return Type | Description |
+|------------------------------------------------|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Dispose()** | `void` | Aborts the subscription worker operation, waiting for the task returned by the `Run` function to finish. |
+| **DisposeAsync()** | `Task` | Async version of `Dispose()`. |
+| **Dispose(bool waitForSubscriptionTask)** | `void` | Aborts the subscription worker, but allows deciding whether to wait for the `Run` function task or not. |
+| **DisposeAsync(bool waitForSubscriptionTask)** | `Task` | Async version of `DisposeAsync(bool waitForSubscriptionTask)`. |
+| **Run (multiple overloads)** | `Task` | Call `Run` to begin the worker's batch processing. Pass the batch processing delegates to this method (see [above](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker)). |
+
+
+
+
+
+##### Events
+
+| Event | Event type | Description |
+|-----------------------------------|:--------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **AfterAcknowledgment** | `AfterAcknowledgmentAction<T>` | Triggered after each time the server acknowledges the progress of batch processing. |
+| **OnSubscriptionConnectionRetry** | `Action<Exception>` | Triggered when the subscription worker attempts to reconnect to the server after a failure. The event receives as a parameter the exception that interrupted the processing. |
+| **OnDisposed** | `Action<SubscriptionWorker<T>>` | Triggered after the subscription worker is disposed. |
+
+
+
+##### AfterAcknowledgmentAction
+
+| Parameter | | |
+|-------------|------------------------|------------------------------------------|
+| **batch** | `SubscriptionBatch<T>` | The batch whose processing was acknowledged |
+
+| Return value | |
+|----------------|---------------------------------------------------------------------------------------------------------|
+| `Task` | A task the worker awaits until the event processing is finished (useful for async handlers, etc.) |
+
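+
+For example, a sketch of registering handlers for these events; the handler bodies are illustrative:
+
+
+
+{`worker.AfterAcknowledgment += batch =>
+\{
+    Console.WriteLine($"Acknowledged a batch of \{batch.NumberOfItemsInBatch\} items");
+    return Task.CompletedTask;
+\};
+
+worker.OnSubscriptionConnectionRetry += exception =>
+    Console.WriteLine($"Connection retry triggered by: \{exception.Message\}");
+`}
+
+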
+
+
+
+
+
+
+##### Properties
+
+| Member | Type | Description |
+|-------------------------------|----------|-----------------------------------------------------------------------|
+| **CurrentNodeTag** | `string` | The node tag of the current RavenDB server handling the subscription. |
+| **SubscriptionName** | `string` | The name of the currently processed subscription. |
+| **WorkerId** | `string` | The worker ID. |
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-java.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-java.mdx
new file mode 100644
index 0000000000..d99288db9a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-java.mdx
@@ -0,0 +1,175 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Create the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker)
+ * [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions)
+ * [Run the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker)
+ * [SubscriptionBatch<T>](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionbatcht)
+ * [SubscriptionWorker<T>](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkert)
+
+
+## Create the subscription worker
+
+Subscription workers are created through the `DocumentStore`'s `subscriptions()` method, which returns a `DocumentSubscriptions` instance:
+
+
+{`SubscriptionWorker<ObjectNode> getSubscriptionWorker(SubscriptionWorkerOptions options);
+SubscriptionWorker<ObjectNode> getSubscriptionWorker(SubscriptionWorkerOptions options, String database);
+
+SubscriptionWorker<ObjectNode> getSubscriptionWorker(String subscriptionName);
+SubscriptionWorker<ObjectNode> getSubscriptionWorker(String subscriptionName, String database);
+
+<T> SubscriptionWorker<T> getSubscriptionWorker(Class<T> clazz, SubscriptionWorkerOptions options);
+<T> SubscriptionWorker<T> getSubscriptionWorker(Class<T> clazz, SubscriptionWorkerOptions options, String database);
+
+<T> SubscriptionWorker<T> getSubscriptionWorker(Class<T> clazz, String subscriptionName);
+<T> SubscriptionWorker<T> getSubscriptionWorker(Class<T> clazz, String subscriptionName, String database);
+
+<T> SubscriptionWorker<Revision<T>> getSubscriptionWorkerForRevisions(Class<T> clazz, SubscriptionWorkerOptions options);
+<T> SubscriptionWorker<Revision<T>> getSubscriptionWorkerForRevisions(Class<T> clazz, SubscriptionWorkerOptions options, String database);
+
+<T> SubscriptionWorker<Revision<T>> getSubscriptionWorkerForRevisions(Class<T> clazz, String subscriptionName);
+<T> SubscriptionWorker<Revision<T>> getSubscriptionWorkerForRevisions(Class<T> clazz, String subscriptionName, String database);
+`}
+
+
+
+| Parameter | | |
+|----------------------|-----------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **subscriptionName** | `String` | The subscription's name. This parameter appears in the simpler overloads, which start processing with default options and do not require a `SubscriptionWorkerOptions` instance |
+| **options** | `SubscriptionWorkerOptions` | Options that affect how the worker interacts with the subscription. These options do not alter the definition of the subscription itself. |
+| **database** | `String` | The name of the database where the subscription task resides. If `null`, the default database configured in DocumentStore will be used. |
+
+| Return value | |
+|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
+| `SubscriptionWorker` | The subscription worker that has been created. Initially, it is idle and will only start processing documents when the `run` function is called. |
+
+
+
+
+## SubscriptionWorkerOptions
+
+When creating a worker with `SubscriptionWorkerOptions`, the only mandatory property is `subscriptionName`.
+All other parameters are optional and will default to their respective default values if not specified.
+
+
+| Member | Type | Description |
+|-------------------------------------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **subscriptionName** | `String` | The name of the subscription to which the worker will connect. |
+| **timeToWaitBeforeConnectionRetry** | `Duration` | The time to wait before attempting to reconnect after a non-aborting failure during subscription processing. Default: 5 seconds. |
+| **ignoreSubscriberErrors** | `boolean` | Determines if subscription processing is aborted when the worker's batch-handling code throws an unhandled exception. <br/> `true` – subscription processing will continue. <br/> `false` (Default) – subscription processing will be aborted. |
+| **strategy** | `SubscriptionOpeningStrategy` (enum) | Configures how the server handles connection attempts from workers to a specific subscription task. Learn more in [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies). Default: `OPEN_IF_FREE`. |
+| **maxDocsPerBatch** | `int` | The maximum number of documents that the server will try to retrieve and send to the client in a batch. If the server doesn't find as many documents as specified, it will send the documents it has found without waiting. Default: 4096. |
+| **closeWhenNoDocsLeft** | `boolean` | Determines whether the subscription connection closes when no new documents are available. <br/> `true` – The subscription worker processes all available documents and stops when none remain, at which point the `run` method throws a `SubscriptionClosedException`. Useful for ad-hoc, one-time processing. <br/> `false` (Default) – The subscription worker remains active, waiting for new documents. |
+| **sendBufferSizeInBytes** | `int` | The size in bytes of the TCP socket buffer used for _sending_ data. Default: 32,768 bytes (32 KiB). |
+| **receiveBufferSizeInBytes** | `int` | The size in bytes of the TCP socket buffer used for _receiving_ data. Default: 4096 (4 KiB). |
+
+
+
+## Run the subscription worker
+
+After [creating](../../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker) a subscription worker, the worker is still not processing any documents.
+To start processing, you need to call the `run` method of the [SubscriptionWorker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkert).
+
+The `run` function takes a delegate, which is your client-side code responsible for processing the received document batches.
+
+
+
+{`CompletableFuture<Void> run(Consumer<SubscriptionBatch<T>> processDocuments);
+`}
+
+
+
+| Parameter | | |
+|----------------------|----------------------------------|--------------------------------------|
+| **processDocuments** | `Consumer<SubscriptionBatch<T>>` | Delegate for synchronous batch processing |
+
+| Return value | |
+|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `CompletableFuture<Void>` | A future that remains active as long as the subscription worker is processing or attempting to process. If processing is aborted, the future completes with an exception |
+
+
+
+
+## SubscriptionBatch<T>
+
+| Member | Type | Description |
+|--------------------------|-----------------------------------|-------------------------------|
+| **items** | `List<SubscriptionBatch<T>.Item>` | List of items in the batch. |
+| **numberOfItemsInBatch** | `int` | Number of items in the batch. |
+
+| Method Signature | Return value | Description |
+|--------------------|--------------------|--------------------------------------------------------------------------------------|
+| **openSession()** | `IDocumentSession` | New document session, that tracks all items and included items of the current batch. |
+
+
+
+
+As long as there is no exception, the worker will continue addressing the same
+server that the first batch was received from.
+If the worker fails to reach that node, it will try to failover to another node
+from the session's topology list.
+The node that the worker succeeds in connecting to will inform the worker which
+node is currently responsible for data subscriptions.
+
+
+
+
+
+
+
+If `T` is `ObjectNode`, no deserialization will take place.
+
+
+| Member | Type | Description |
+|----------------------|-----------------------|----------------------------------------------------------------------------------------|
+| **result** | `T` | Current batch item. |
+| **exceptionMessage** | `String` | Message of the exception thrown during current document processing in the server side. |
+| **id** | `String` | Current batch item's underlying document ID. |
+| **changeVector** | `String` | The change vector of the current batch item's underlying document. |
+| **rawResult** | `ObjectNode` | Current batch item before serialization to `T`. |
+| **rawMetadata** | `ObjectNode` | Current batch item's underlying document metadata. |
+| **metadata** | `IMetadataDictionary` | Current batch item's underlying metadata values. |
+
+
+
+
+
+## SubscriptionWorker<T>
+
+
+
+| Method Signature | Return Type | Description |
+|------------------------------|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **close()** | `void` | Aborts the subscription worker operation, waiting for the task returned by the `run` function to finish. |
+| **run (multiple overloads)** | `CompletableFuture<Void>` | Call `run` to begin the worker's batch processing. Pass the batch processing delegates to this method (see [above](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker)). |
+
+
+
+
+
+| Event | Type\Return type | Description |
+|------------------------------------|-------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **addAfterAcknowledgmentListener** | `Consumer<SubscriptionBatch<T>>` (event) | Event raised after each time the server acknowledges batch processing progress. |
+| **onSubscriptionConnectionRetry** | `Consumer<Exception>` (event) | Event raised when the subscription worker tries to reconnect to the server after a failure. The event receives as a parameter the exception that interrupted the processing. |
+| **onClosed** | `Consumer<SubscriptionWorker<T>>` (event) | Event raised after the subscription worker is disposed. |
+
+
+
+
+
+| Member | Type\Return type | Description |
+|----------------------|--------------------|-----------------------------------------------------------------------|
+| **currentNodeTag** | `String` | The node tag of the current RavenDB server handling the subscription. |
+| **subscriptionName** | `String` | The name of the currently processed subscription. |
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-nodejs.mdx
new file mode 100644
index 0000000000..948886203a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-nodejs.mdx
@@ -0,0 +1,212 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Create the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker)
+ * [Subscription worker options](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscription-worker-options)
+ * [Run the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker)
+ * [Subscription batch](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscription-batch)
+ * [Subscription batch item](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscription-batch-item)
+ * [Subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscription-worker)
+
+
+## Create the subscription worker
+
+A subscription worker can be created using the following `getSubscriptionWorker` methods available through the `subscriptions` property of the `documentStore`.
+
+Note: simply creating the worker is not enough;
+you must then [run the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker) to initiate document processing.
+
+
+
+{`await documentStore.subscriptions.getSubscriptionWorker(subscriptionName);
+await documentStore.subscriptions.getSubscriptionWorker(subscriptionName, database);
+
+await documentStore.subscriptions.getSubscriptionWorker(options);
+await documentStore.subscriptions.getSubscriptionWorker(options, database);
+`}
+
+
+
+| Parameter | Type | Description |
+|----------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **subscriptionName** | `string` | The name of the subscription to which the worker will connect. |
+| **database** | `string` | The name of the database where the subscription task resides. If `null`, the default database configured in DocumentStore will be used. |
+| **options** | `object` | A [subscription worker options](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscription-worker-options) object that affects how the worker interacts with the subscription. These options do not alter the definition of the subscription itself. |
+
+| Return value | |
+|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `SubscriptionWorker` | The [subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscription-worker) that has been created. The worker will start processing documents once you register a handler for its `batch` event using the worker's `on` method. |
+
+
+
+## Subscription worker options
+
+
+
+{`// The SubscriptionWorkerOptions object:
+// =====================================
+\{
+ subscriptionName;
+ documentType;
+ ignoreSubscriberErrors;
+ closeWhenNoDocsLeft;
+ maxDocsPerBatch;
+ timeToWaitBeforeConnectionRetry;
+ maxErroneousPeriod;
+ strategy;
+\}
+`}
+
+
+
+When creating a worker with subscription worker options, the only mandatory property is `subscriptionName`.
+All other parameters are optional and will default to their respective default values if not specified.
+
+| Member | Type | Description |
+|-------------------------------------|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **subscriptionName** | `string` | The name of the subscription to which the worker will connect. |
+| **documentType** | `object` | The class type of the subscription documents. |
+| **ignoreSubscriberErrors** | `boolean` | Determines if subscription processing is aborted when the worker's batch-handling code throws an unhandled exception.
`true` – subscription processing will continue.
`false` (default) – subscription processing will be aborted. |
+| **closeWhenNoDocsLeft** | `boolean` | Determines whether the subscription connection closes when no new documents are available.
`true` – The subscription worker processes all available documents and stops when none remain, at which point the `SubscriptionClosedException` will be thrown. Useful for ad-hoc, one-time processing.
`false` (default) – The subscription worker remains active, waiting for new documents. |
+| **maxDocsPerBatch** | `number` | The maximum number of documents that the server will try to retrieve and send to the client in a batch. If the server doesn't find as many documents as specified, it will send the documents it has found without waiting. Default: 4096. |
+| **timeToWaitBeforeConnectionRetry** | `number` | The time (in ms) to wait before attempting to reconnect after a non-aborting failure during subscription processing. Default: 5 seconds. |
+| **maxErroneousPeriod** | `number` | The maximum amount of time (in ms) a subscription connection can remain in an erroneous state before it is terminated. Default: 5 minutes. |
+| **strategy** | `string` | The strategy configures how the server handles connection attempts from workers to a specific subscription task.
Available options: `OpenIfFree` (default), `TakeOver`, `WaitForFree`, or `Concurrent`.
Learn more in [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies). |
+
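+
+For example, a sketch of creating a worker with customized options (the subscription name is a placeholder; all properties other than `subscriptionName` are optional):
+
+
+
+{`const workerOptions = \{
+    subscriptionName: "ProcessOrders", // mandatory
+    maxDocsPerBatch: 100,              // override the default of 4096
+    strategy: "WaitForFree"            // wait if another worker holds the connection
+\};
+
+const worker = documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+`}
+
+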
+
+
+## Run the subscription worker
+
+After [creating](../../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker) a subscription worker, the worker does not yet process any documents.
+To initiate processing, you need to define an event handler and attach it to the worker's `batch` event.
+
+This handler contains your client-side code responsible for processing the document batches received from the server.
+Whenever a new batch of documents is ready, the provided handler will be triggered.
+
+
+
+{`subscriptionWorker.on("batch", (batch, callback) => \{
+ // Process incoming items:
+ // =======================
+
+ // 'batch':
+ // Contains the documents to be processed.
+
+ // callback():
+ // Needs to be called after processing the batch
+ // to notify the worker that you're done processing.
+\});
+`}
+
+
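+
+For example, a minimal sketch of a concrete handler (`processOrder` is a hypothetical placeholder for your own logic):
+
+
+
+{`subscriptionWorker.on("batch", (batch, callback) => \{
+    try \{
+        for (const item of batch.items) \{
+            processOrder(item.result); // your own processing logic (placeholder)
+        \}
+        // Acknowledge the batch so the server can send the next one
+        callback();
+    \} catch (err) \{
+        // Passing the error notifies the worker that processing failed
+        callback(err);
+    \}
+\});
+`}
+
+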
+
+
+
+## Subscription batch
+
+The subscription batch class contains the following public properties & methods:
+
+| Property | Type | Description |
+|-------------------------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **items** | `Item[]` | List of items in the batch. See [subscription batch item](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscription-batch-item). |
+
+| Method | Return type | Description |
+|-------------------------------|-------------|--------------------------------------------------------------------------------------------------------------------------|
+| **getNumberOfItemsInBatch()** | `number` | Get the number of items in the batch. |
+| **getNumberOfIncludes()** | `number` | Get the number of included documents in the batch. |
+| **openSession()** | `object` | Open a new document session that tracks all items and their included items within the current batch. |
+| **openSession(options)** | `object` | Open a new document session - can pass [session options](../../../client-api/session/opening-a-session.mdx#session-options). |
+
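+
+For example, a sketch that opens a session from the batch to update and save the received documents (the `Processed` field is assumed here for illustration only):
+
+
+
+{`subscriptionWorker.on("batch", async (batch, callback) => \{
+    try \{
+        console.log("Received " + batch.getNumberOfItemsInBatch() + " items");
+
+        // The session tracks the batch items and their included documents
+        const session = batch.openSession();
+
+        for (const item of batch.items) \{
+            item.result.Processed = true; // assumed field, for illustration
+        \}
+
+        await session.saveChanges();
+        callback();
+    \} catch (err) \{
+        callback(err);
+    \}
+\});
+`}
+
+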
+
+
+##### Subscription worker connectivity
+
+As long as there is no exception, the worker will keep addressing the same server from which the first batch was received.
+If the worker fails to reach that node, it will try to [failover](../../../client-api/configuration/load-balance/overview.mdx) to another node from the session's topology list.
+The node that the worker succeeds in connecting to will inform the worker which node is currently responsible for data subscriptions.
+
+
+
+
+
+## Subscription batch item
+
+This class represents a single item in a subscription batch result.
+
+
+
+{`class Item
+\{
+ result;
+ exceptionMessage;
+ id;
+ changeVector;
+ projection;
+ revision;
+ rawResult;
+ rawMetadata;
+ metadata;
+\}
+`}
+
+
+
+| Member | Type | Description |
+|----------------------|-----------|-------------------------------------------------------------------------------------|
+| **result** | `object` | The current batch item. |
+| **exceptionMessage** | `string` | The message of the exception thrown on the server side while processing the current document. |
+| **id** | `string` | The document ID of the underlying document for the current batch item. |
+| **changeVector** | `string` | The change vector of the underlying document for the current batch item. |
+| **projection** | `boolean` | Indicates whether the current batch item is a projection. |
+| **revision** | `boolean` | Indicates whether the current batch item is a document revision. |
+| **rawResult** | `object` | The current batch item without type reconstruction (raw JSON). |
+| **rawMetadata** | `object` | Current batch item's underlying document metadata. |
+| **metadata** | `object` | Current batch item's underlying metadata values. |
+
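+
+For example, a handler might read these members for logging purposes (a sketch; `@last-modified` is a standard RavenDB metadata key):
+
+
+
+{`subscriptionWorker.on("batch", (batch, callback) => \{
+    for (const item of batch.items) \{
+        // Log the document ID and change vector alongside the typed result
+        console.log("Processing " + item.id + " (change vector: " + item.changeVector + ")");
+
+        // Metadata values are available through the 'metadata' member
+        console.log("Last modified: " + item.metadata["@last-modified"]);
+    \}
+    callback();
+\});
+`}
+
+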
+
+
+## Subscription worker
+
+
+
+##### Methods
+
+| Method | Return type | Description |
+|-------------------|---------------|---------------------------------------------------|
+| **dispose()** | `void` | Aborts subscription worker operation. |
+| **on()** | `object` | Method used to set up event listeners & handlers. |
+| **getWorkerId()** | `string` | Get the worker ID. |
+
+
+
+
+
+##### Events
+
+| Event | Listener signature | Description |
+|-----------------------------------|-----------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **"batch"** | `(batch, callback) => void` | Emitted when a batch of documents is sent from the server to the client.
Once processing is done, `callback` *must be called* so that subsequent batches can be emitted. |
+| **"afterAcknowledgment"** | `(batch, callback) => void` | Emitted after each time the server acknowledges the progress of batch processing. |
+| **"connectionRetry"** | `(error) => void` | Emitted when the worker attempts to reconnect to the server after a failure. |
+| **"error"** | `(error) => void` | Emitted on subscription errors. |
+| **"end"** | `(error) => void` | Emitted when the subscription has finished; no more batches will be emitted. |
+
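+
+For example, a sketch that wires the events listed above together:
+
+
+
+{`const worker = documentStore.subscriptions.getSubscriptionWorker(\{ subscriptionName \});
+
+worker.on("connectionRetry", error => \{
+    console.warn("Retrying connection:", error);
+\});
+
+worker.on("error", error => \{
+    console.error("Subscription error:", error);
+\});
+
+worker.on("end", () => \{
+    console.log("Subscription ended - no more batches will be emitted");
+    worker.dispose();
+\});
+
+worker.on("batch", (batch, callback) => \{
+    // process batch.items here...
+    callback();
+\});
+`}
+
+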
+
+
+
+
+##### Properties
+
+| Member | Type | Description |
+|----------------------|----------|-----------------------------------------------------------------------|
+| **currentNodeTag** | `string` | The node tag of the current RavenDB server handling the subscription. |
+| **subscriptionName** | `string` | The name of the currently processed subscription. |
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-python.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-python.mdx
new file mode 100644
index 0000000000..e91e65fa4f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_api-overview-python.mdx
@@ -0,0 +1,207 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Create the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker)
+ * [`SubscriptionWorkerOptions`](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions)
+ * [Run the subscription worker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker)
+ * [`SubscriptionBatch[_T]`](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionbatch[_t])
+ * [`SubscriptionWorker[_T]`](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworker[_t])
+
+
+## Create the subscription worker
+
+Create a subscription worker using `get_subscription_worker` or `get_subscription_worker_by_name`.
+
+* Use the `get_subscription_worker` method to specify the subscription options while creating the worker.
+* Use the `get_subscription_worker_by_name` method to create the worker using the default options.
+
+
+
+{`def get_subscription_worker(
+ self, options: SubscriptionWorkerOptions, object_type: Optional[Type[_T]] = None, database: Optional[str] = None
+) -> SubscriptionWorker[_T]: ...
+
+def get_subscription_worker_by_name(
+ self,
+ subscription_name: Optional[str] = None,
+ object_type: Optional[Type[_T]] = None,
+ database: Optional[str] = None,
+) -> SubscriptionWorker[_T]: ...
+`}
+
+
+
+| Parameter | | |
+|----------------------------------|-----------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **options** | `SubscriptionWorkerOptions` | Options that affect how the worker interacts with the subscription. These options do not alter the definition of the subscription itself. |
+| **object_type** (Optional) | `Type[_T]` | Defines the object type (class) for the items that will be included in the received `SubscriptionBatch` object. |
+| **database** (Optional) | `str` | The name of the database where the subscription task resides. If `None`, the default database configured in DocumentStore will be used. |
+| **subscription_name** (Optional) | `str` | The subscription's name. Used by `get_subscription_worker_by_name` when creating the worker without a `SubscriptionWorkerOptions` instance, relying on the default values. |
+
+| Return value | |
+|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
+| `SubscriptionWorker` | The subscription worker that has been created. Initially, it is idle and will only start processing documents when the `run` function is called. |
+
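+
+For example, a sketch of both creation methods (the subscription name and the `Order` class are placeholders; imports are omitted, matching the surrounding examples):
+
+
+
+{`# With explicit options
+options = SubscriptionWorkerOptions("ProcessOrders")
+worker = store.subscriptions.get_subscription_worker(options, Order)
+
+# By name, relying on the default options
+worker = store.subscriptions.get_subscription_worker_by_name("ProcessOrders", Order)
+`}
+
+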
+
+
+## `SubscriptionWorkerOptions`
+
+When creating a worker with `SubscriptionWorkerOptions`, the only mandatory property is `subscription_name`.
+All other parameters are optional and will default to their respective default values if not specified.
+
+| Member | Type | Description |
+|------------------------------------------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **subscription_name** | `str` | The name of the subscription to which the worker will connect. |
+| **time_to_wait_before_connection_retry** | `timedelta` | The time to wait before attempting to reconnect after a non-aborting failure during subscription processing. Default: 5 seconds. |
+| **ignore_subscriber_errors** | `bool` | Determines if subscription processing is aborted when the worker's batch-handling code throws an unhandled exception.
`True` – subscription processing will continue.
`False` (Default) – subscription processing will be aborted. |
+| **max_docs_per_batch** | `int` | The maximum number of documents that the server will try to retrieve and send to the client in a batch. If the server doesn't find as many documents as specified, it will send the documents it has found without waiting. Default: 4096. |
+| **close_when_no_docs_left** | `bool` | Determines whether the subscription connection closes when no new documents are available.
`True` – The subscription worker processes all available documents and stops when none remain, at which point the `run` method throws a `SubscriptionClosedException`. Useful for ad-hoc, one-time processing.
`False` (Default) – The subscription worker remains active, waiting for new documents. |
+| **send_buffer_size_in_bytes** | `int` | The size in bytes of the TCP socket buffer used for _sending_ data. Default: 32,768 bytes (32 KiB). |
+| **receive_buffer_size_in_bytes** | `int` | The size in bytes of the TCP socket buffer used for _receiving_ data. Default: 4096 (4 KiB). |
+| **strategy** | `SubscriptionOpeningStrategy` (enum) | Configures how the server handles connection attempts from workers to a specific subscription task. Learn more in [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies). Default: `OPEN_IF_FREE`. |
+
+
+
+Learn more about `SubscriptionOpeningStrategy` in [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies).
+
+| `SubscriptionOpeningStrategy` | |
+|---------------------------------|---------------------------------------------------|
+| `OPEN_IF_FREE` | Connect if no other worker is connected |
+| `WAIT_FOR_FREE` | Wait for currently connected worker to disconnect |
+| `TAKE_OVER` | Take over the connection |
+| `CONCURRENT` | Connect concurrently |
+
+
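+
+For example, a sketch of configuring a worker that takes over an existing connection (the subscription name is a placeholder):
+
+
+
+{`options = SubscriptionWorkerOptions("ProcessOrders")
+options.max_docs_per_batch = 100
+options.time_to_wait_before_connection_retry = timedelta(seconds=10)
+options.strategy = SubscriptionOpeningStrategy.TAKE_OVER
+
+worker = store.subscriptions.get_subscription_worker(options, Order)
+`}
+
+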
+
+
+
+## Run the subscription worker
+
+After [creating](../../../client-api/data-subscriptions/consumption/api-overview.mdx#create-the-subscription-worker) a subscription worker, the worker does not yet process any documents.
+To start processing, you need to call the `run` function of the [SubscriptionWorker](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworker[_t]).
+
+The `run` function receives the client-side code as a function that will process the received document batches.
+
+
+
+{`def run(self, process_documents: Optional[Callable[[SubscriptionBatch[_T]], Any]]) -> Future: ...
+`}
+
+
+
+| Parameter | | |
+|----------------------------------|--------------------------------------------|-----------------------------------|
+| **process_documents** (Optional) | `Callable[[SubscriptionBatch[_T]], Any]` | A callable that processes each received batch of documents |
+
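+
+For example, a sketch of running the worker with a processing callback (`process_order` is a hypothetical placeholder):
+
+
+
+{`def process_batch(batch: SubscriptionBatch[Order]):
+    for item in batch.items:
+        process_order(item.result)  # your own logic (placeholder)
+
+# 'run' returns a Future; calling 'result()' blocks until the worker stops
+worker_task = worker.run(process_batch)
+worker_task.result()
+`}
+
+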
+
+
+
+## `SubscriptionBatch[_T]`
+
+| Member | Type | Description |
+|------------------------------|------------------------------------|------------------------------|
+| **items** | `SubscriptionBatch[_T].Item` array | List of items in the batch |
+| **number_of_items_in_batch** | `int` | Number of items in the batch |
+
+
+
+{`def number_of_items_in_batch(self) -> int:
+ return 0 if self.items is None else len(self.items)
+`}
+
+
+
+
+
+As long as there is no exception, the worker will keep addressing the same
+server from which the first batch was received.
+If the worker fails to reach that node, it will try to
+[failover](../../../client-api/configuration/load-balance/overview.mdx) to another node
+from the session's topology list.
+The node that the worker succeeds in connecting to will inform the worker which
+node is currently responsible for data subscriptions.
+
+
+
+
+{`class Item(Generic[_T_Item]):
+ """
+    Represents a single item in a subscription batch result.
+ This class should be used only inside the subscription's run delegate,
+ using it outside this scope might cause unexpected behavior.
+ """
+`}
+
+
+
+
+{`class Item(Generic[_T_Item]):
+
+    def __init__(self):
+        self._result: Optional[_T_Item] = None
+        self._exception_message: Optional[str] = None
+        self._key: Optional[str] = None
+        self._change_vector: Optional[str] = None
+        self._projection: Optional[bool] = None
+        self._revision: Optional[bool] = None
+        self.raw_result: Optional[Dict] = None
+        self.raw_metadata: Optional[Dict] = None
+        self._metadata: Optional[MetadataAsDictionary] = None
+`}
+
+
+
+| `SubscriptionBatch[_T].Item` Member | Type | Description |
+|---------------------------------------|------------------------|---------------------------------------------------------------------------------------|
+| **\_result** (Optional) | `_T_Item` | Current batch item |
+| **\_exception_message** (Optional) | `str` | Message of the exception thrown on the server side while processing the current document |
+| **\_key** (Optional) | `str` | Document ID of the current batch item's underlying document |
+| **\_change_vector** (Optional) | `str` | Change vector of the current batch item's underlying document |
+| **\_projection** (Optional) | `bool` | Indicates whether the value is a projection |
+| **\_revision** (Optional) | `bool` | Indicates whether the item represents a document revision |
+| **raw_result** (Optional) | `Dict` | Current batch item before deserialization into `_T` |
+| **raw_metadata** (Optional) | `Dict` | Metadata of the current batch item's underlying document |
+| **\_metadata** (Optional) | `MetadataAsDictionary` | Metadata values of the current batch item's underlying document |
+
+
+Usage of `raw_result`, `raw_metadata`, and `_metadata` values outside of the document processing delegate
+is not supported.
+
+
+
+
+## `SubscriptionWorker[_T]`
+### Methods:
+
+| Method | Return Type | Description |
+|-------------------------------------------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `close(wait_for_subscription_task: bool = True)` | `None` | Aborts the subscription worker. When `wait_for_subscription_task` is `True`, waits for the task returned by the `run` function to complete before returning. |
+| `run` | `Future[None]` | Call `run` to begin the worker's batch processing. Pass the batch-processing delegate to this method (see [above](../../../client-api/data-subscriptions/consumption/api-overview.mdx#run-the-subscription-worker)). |
+### Events:
+
+| Event | Type / Return type | Description |
+|---------------------------|-------------------------------------------|----------------------------------------------------------------------------------|
+| **after\_acknowledgment** | `Callable[[SubscriptionBatch[_T]], None]` | Event invoked after each time the server acknowledges batch processing progress. |
+
+| `after_acknowledgment` Parameters | | |
+|------------------------------------|-------------------------|------------------------------------------|
+| **batch** | `SubscriptionBatch[_T]` | The batch whose processing was acknowledged |
+
+| Return value | |
+|----------------|--------------------------------------------------------------|
+| `Future[None]` | A future that the worker awaits until the event processing completes |
+
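+
+A sketch of registering an acknowledgment handler, assuming the registration method follows the same `add_*` pattern as `add_on_subscription_connection_retry` (the method name below is an assumption):
+
+
+
+{`def on_acknowledged(batch: SubscriptionBatch[Order]):
+    # Log the acknowledged batch size (member per the table above)
+    print("Acknowledged a batch of", batch.number_of_items_in_batch, "items")
+
+worker.add_after_acknowledgment(on_acknowledged)  # assumed method name
+`}
+
+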
+### Properties:
+
+| Member | Type | Description |
+|-----------------------|---------|-----------------------------------------------------------------------|
+| **current_node_tag** | `str` | The node tag of the current RavenDB server handling the subscription. |
+| **subscription_name** | `str` | The name of the currently processed subscription. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_category_.json b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_category_.json
new file mode 100644
index 0000000000..4f2eeb38b4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_category_.json
@@ -0,0 +1,4 @@
+{
+    "position": 2,
+    "label": "Consumption"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-csharp.mdx
new file mode 100644
index 0000000000..3e4f97e0d8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-csharp.mdx
@@ -0,0 +1,450 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Client with full exception handling and processing retries](../../../client-api/data-subscriptions/consumption/examples.mdx#client-with-full-exception-handling-and-processing-retries)
+ * [Worker with a specified batch size](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-with-a-specified-batch-size)
+ * [Worker that operates with a session](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-operates-with-a-session)
+ * [Worker that processes dynamic objects](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-processes-dynamic-objects)
+ * [Worker that processes a blittable object](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-processes-a-blittable-object)
+ * [Subscription that ends when no documents are left](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-ends-when-no-documents-are-left)
+ * [Subscription that uses included documents](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-uses-included-documents)
+ * [Subscription workers with failover on other nodes](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-workers-with-failover-on-other-nodes)
+ * [Primary and secondary workers](../../../client-api/data-subscriptions/consumption/examples.mdx#primary-and-secondary-workers)
+
+
+## Client with full exception handling and processing retries
+
+Here we implement a client that handles exceptions thrown by the worker.
+If the exception is recoverable, the client retries creating the worker.
+
+
+
+{`while (true)
+\{
+ // Create the worker:
+ // ==================
+ var options = new SubscriptionWorkerOptions(subscriptionName);
+
+ // Configure the worker:
+ // Allow a downtime of up to 2 hours,
+ // and wait 2 minutes before reconnecting
+ options.MaxErroneousPeriod = TimeSpan.FromHours(2);
+ options.TimeToWaitBeforeConnectionRetry = TimeSpan.FromMinutes(2);
+
+ subscriptionWorker = store.Subscriptions.GetSubscriptionWorker(options);
+
+ try
+ \{
+ // Subscribe to connection retry events
+ // and log any exceptions that occur during processing
+ subscriptionWorker.OnSubscriptionConnectionRetry += exception =>
+ \{
+ Logger.Error("Error during subscription processing: " + subscriptionName,
+ exception);
+ \};
+
+ // Run the worker:
+ // ===============
+ await subscriptionWorker.Run(batch =>
+ \{
+ foreach (var item in batch.Items)
+ \{
+ // Forcefully stop subscription processing if the ID is "companies/2-A"
+ // and throw an exception to let external logic handle the specific case
+ if (item.Result.Company == "companies/2-A")
+ \{
+ // The custom exception thrown from here
+ // will be wrapped by \`SubscriberErrorException\`
+ throw new UnsupportedCompanyException(
+                    "Company ID can't be 'companies/2-A', please fix");
+ \}
+
+ // Process the order document - provide your own logic
+ ProcessOrder(item.Result);
+ \}
+ \}, cancellationToken);
+
+ // The Run method will stop if the subscription worker is disposed,
+ // exiting the while loop
+ return;
+ \}
+ catch (Exception e)
+ \{
+ Logger.Error("Failure in subscription: " + subscriptionName, e);
+
+ // The following exceptions are Not recoverable
+ if (e is DatabaseDoesNotExistException ||
+ e is SubscriptionDoesNotExistException ||
+ e is SubscriptionInvalidStateException ||
+ e is AuthorizationException)
+ throw;
+
+
+ if (e is SubscriptionClosedException)
+ // Subscription probably closed explicitly by admin
+ return;
+
+ if (e is SubscriberErrorException se)
+ \{
+ // For UnsupportedCompanyException we want to throw an exception,
+ // otherwise, continue processing
+ if (se.InnerException != null && se.InnerException is UnsupportedCompanyException)
+ \{
+ throw;
+ \}
+
+ // Call continue to skip the current while(true) iteration and try reconnecting
+ // in the next one, allowing the worker to process future batches.
+ continue;
+ \}
+
+ // Handle this depending on the subscription opening strategy
+ if (e is SubscriptionInUseException)
+ continue;
+
+ // Call return to exit the while(true) loop,
+ // dispose the worker (via finally), and stop the subscription.
+ return;
+ \}
+ finally
+ \{
+ subscriptionWorker.Dispose();
+ \}
+\}
+`}
+
+
+
+
+
+## Worker with a specified batch size
+
+Here we create a worker and specify the maximum number of documents the server will send to the worker in each batch.
+
+
+
+{`var workerWBatch = store.Subscriptions.GetSubscriptionWorker(
+ new SubscriptionWorkerOptions(subscriptionName)
+ \{
+ MaxDocsPerBatch = 20
+ \});
+
+_ = workerWBatch.Run(x =>
+\{
+ // your custom logic
+\});
+`}
+
+
+
+
+
+## Worker that operates with a session
+
+Here we create a subscription that sends _Order_ documents that do not have a shipping date.
+The worker receiving these documents will update the `ShippedAt` field value and save the document back to the server via the session.
+
+
+Note:
+The session is opened with `batch.OpenSession` instead of with `Store.OpenSession`.
+
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+var subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+\{
+ Query = @"from Orders as o where o.ShippedAt = null"
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+var subscriptionWorker = store.Subscriptions.GetSubscriptionWorker(subscriptionName);
+_ = subscriptionWorker.Run(batch =>
+\{
+ // Open a session with 'batch.OpenSession'
+ using (var session = batch.OpenSession())
+ \{
+ foreach (var order in batch.Items.Select(x => x.Result))
+ \{
+ TransferOrderToShipmentCompany(order); // call your custom method
+ order.ShippedAt = DateTime.UtcNow; // update the document field
+ \}
+
+ // Save the updated Order documents
+ session.SaveChanges();
+ \}
+\});
+`}
+
+
+
+
+
+## Worker that processes dynamic objects
+
+Here we define a subscription that projects the _Order_ documents into a dynamic format.
+The worker processes the dynamic objects it receives.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+var subscriptionName = "My dynamic subscription";
+store.Subscriptions.Create(new SubscriptionCreationOptions()
+\{
+ Name = subscriptionName,
+ Projection = order =>
+        new \{ DynamicField_1 = "Company: " + order.Company + " Employee: " + order.Employee \}
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+var subscriptionWorker = store.Subscriptions.GetSubscriptionWorker(subscriptionName);
+_ = subscriptionWorker.Run(batch =>
+\{
+ foreach (var item in batch.Items)
+ \{
+ // Access the dynamic field in the document
+        dynamic field = item.Result.DynamicField_1;
+
+ // Call your custom method
+ ProcessItem(field);
+ \}
+\});
+`}
+
+
+
+
+
+## Worker that processes a blittable object
+
+Create a worker that processes documents as low-level blittable objects.
+This can be useful in extreme high-performance scenarios, but may be dangerous due to the direct use of unmanaged memory.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+var subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions
+\{
+ Projection = x => new
+ \{
+ x.Employee
+ \}
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+var subscriptionWorker =
+ // Specify \`BlittableJsonReaderObject\` as the generic type parameter
+    store.Subscriptions.GetSubscriptionWorker<BlittableJsonReaderObject>(subscriptionName);
+
+_ = subscriptionWorker.Run(batch =>
+\{
+ foreach (var item in batch.Items)
+ \{
+ // Access the Employee field within the blittable object
+ var employeeField = item.Result["Employee"].ToString();
+
+ ProcessItem(employeeField); // call your custom method
+ \}
+\});
+`}
+
+
+
+
+
+## Subscription that ends when no documents are left
+
+Here we create a subscription client that runs until there are no more new documents to process.
+This is useful for ad-hoc, single-use processing where the user needs to ensure that all documents are fully processed.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+var subscriptionName = store.Subscriptions.Create(
+ new SubscriptionCreationOptions
+ \{
+ Filter = order => order.Lines.Sum(line => line.PricePerUnit * line.Quantity) > 10000,
+ Projection = order => new OrderAndCompany
+ \{
+ OrderId = order.Id,
+ Company = RavenQuery.Load(order.Company)
+ \}
+ \});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+var highValueOrdersWorker = store.Subscriptions.GetSubscriptionWorker(
+ new SubscriptionWorkerOptions(subscriptionName)
+ \{
+ // Here we set the worker to stop when there are no more documents left to send
+        // Will throw SubscriptionClosedException when it finishes its job
+ CloseWhenNoDocsLeft = true
+ \});
+
+try
+\{
+ await highValueOrdersWorker.Run(batch =>
+ \{
+ foreach (var item in batch.Items)
+ \{
+ SendThankYouNoteToEmployee(item.Result); // call your custom method
+ \}
+ \});
+\}
+catch (SubscriptionClosedException)
+\{
+ // That's expected, no more documents to process
+\}
+`}
+
+
+
+
+
+## Subscription that uses included documents
+
+Here we create a subscription that, in addition to sending all the _Order_ documents to the worker,
+will include all the referenced _Product_ documents in the batch sent to the worker.
+
+When the worker accesses these _Product_ documents, no additional requests will be made to the server.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+var subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+\{
+ // Include the referenced Product documents for each Order document
+ Query = @"from Orders include Lines[].Product"
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+var subscriptionWorker = store.Subscriptions.GetSubscriptionWorker(subscriptionName);
+_ = subscriptionWorker.Run(batch =>
+\{
+ // Open a session via 'batch.OpenSession'
+ // in order to access the Product documents
+ using (var session = batch.OpenSession())
+ \{
+ foreach (var order in batch.Items.Select(x => x.Result))
+ \{
+ foreach (var orderLine in order.Lines)
+ \{
+ // Calling Load will Not generate a request to the server,
+ // because orderLine.Product was included in the batch
+ var product = session.Load(orderLine.Product);
+
+ ProcessOrderAndProduct(order, product); // call your custom method
+ \}
+ \}
+ \}
+\});
+`}
+
+
+
+
+
+## Subscription workers with failover on other nodes
+
+In this configuration, several workers (for example, on different machines) can attempt to connect.
+If the active worker fails, another waiting worker will take over and continue processing.
+
+
+
+{`var worker = store.Subscriptions.GetSubscriptionWorker(
+ new SubscriptionWorkerOptions(subscriptionName)
+\{
+ Strategy = SubscriptionOpeningStrategy.WaitForFree
+\});
+`}
+
+
+
+
+
+## Primary and secondary workers
+
+Here we create two workers:
+
+* The primary worker, with a `TakeOver` strategy, will take over the other worker and establish the connection.
+* The secondary worker, with a `WaitForFree` strategy, will wait for the first worker to fail (due to machine failure, etc.).
+
+The primary worker:
+
+
+{`var primaryWorker = store.Subscriptions.GetSubscriptionWorker(
+ new SubscriptionWorkerOptions(subscriptionName)
+\{
+ Strategy = SubscriptionOpeningStrategy.TakeOver
+\});
+
+while (true)
+\{
+ try
+ \{
+ await primaryWorker.Run(x =>
+ \{
+ // your logic
+ \});
+ \}
+ catch (Exception)
+ \{
+ // retry
+ \}
+\}
+`}
+
+
+
+The secondary worker:
+
+
+{`var secondaryWorker = store.Subscriptions.GetSubscriptionWorker(
+ new SubscriptionWorkerOptions(subscriptionName)
+\{
+ Strategy = SubscriptionOpeningStrategy.WaitForFree
+\});
+
+while (true)
+\{
+ try
+ \{
+ await secondaryWorker.Run(x =>
+ \{
+ // your logic
+ \});
+ \}
+ catch (Exception)
+ \{
+ // retry
+ \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-java.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-java.mdx
new file mode 100644
index 0000000000..295611c535
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-java.mdx
@@ -0,0 +1,294 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Worker with a specified batch size](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-with-a-specified-batch-size)
+ * [Client with full exception handling and processing retries](../../../client-api/data-subscriptions/consumption/examples.mdx#client-with-full-exception-handling-and-processing-retries)
+ * [Subscription that ends when no documents left](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-ends-when-no-documents-left)
+  * [Worker that processes raw objects](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-processes-raw-objects)
+ * [Worker that operates with a session](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-operates-with-a-session)
+ * [Subscription that uses included documents](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-uses-included-documents)
+ * [Primary and secondary workers](../../../client-api/data-subscriptions/consumption/examples.mdx#primary-and-secondary-workers)
+
+
+## Worker with a specified batch size
+
+Here we create a worker and specify the maximum number of documents the server will send to the worker in each batch.
+
+
+
+{`SubscriptionWorkerOptions options = new SubscriptionWorkerOptions(subscriptionName);
+options.setMaxDocsPerBatch(20);
+SubscriptionWorker workerWBatch = store.subscriptions().getSubscriptionWorker(Order.class, options);
+workerWBatch.run(x -> \{ /* custom logic */\});
+`}
+
+
+
+
+
+## Client with full exception handling and processing retries
+
+Here we implement a client that handles exceptions thrown by a worker.
+If the exception is recoverable, the client retries creating the worker.
+
+
+
+{`while (true) \{
+ SubscriptionWorkerOptions options = new SubscriptionWorkerOptions(subscriptionName);
+ // here we configure that we allow a down time of up to 2 hours,
+ // and will wait for 2 minutes for reconnecting
+
+ options.setMaxErroneousPeriod(Duration.ofHours(2));
+ options.setTimeToWaitBeforeConnectionRetry(Duration.ofMinutes(2));
+
+ subscriptionWorker = store.subscriptions().getSubscriptionWorker(Order.class, options);
+
+ try \{
+ // here we are able to be informed of any exception that happens during processing
+ subscriptionWorker.addOnSubscriptionConnectionRetry(exception -> \{
+ logger.error("Error during subscription processing: " + subscriptionName, exception);
+ \});
+
+ subscriptionWorker.run(batch -> \{
+ for (SubscriptionBatch.Item item : batch.getItems()) \{
+ // we want to force close the subscription processing in that case
+ // and let the external code decide what to do with that
+ if ("Europe".equalsIgnoreCase(item.getResult().getShipVia())) \{
+ throw new IllegalStateException("We cannot ship via Europe");
+ \}
+ processOrder(item.getResult());
+ \}
+ \}).get();
+
+
+ // Run will complete normally if you have disposed the subscription
+ return;
+ \} catch (Exception e) \{
+ logger.error("Failure in subscription: " + subscriptionName, e);
+
+ e = ExceptionsUtils.unwrapException(e);
+ if (e instanceof DatabaseDoesNotExistException ||
+ e instanceof SubscriptionDoesNotExistException ||
+ e instanceof SubscriptionInvalidStateException ||
+ e instanceof AuthorizationException) \{
+ throw e; // not recoverable
+ \}
+
+ if (e instanceof SubscriptionClosedException) \{
+ // closed explicitly by admin, probably
+ return;
+ \}
+
+ if (e instanceof SubscriberErrorException) \{
+ SubscriberErrorException se = (SubscriberErrorException) e;
+ // for IllegalStateException type, we want to throw an exception, otherwise
+ // we continue processing
+ if (se.getCause() != null && se.getCause() instanceof IllegalStateException) \{
+ throw e;
+ \}
+
+ continue;
+ \}
+
+ // handle this depending on subscription
+ // open strategy (discussed later)
+ if (e instanceof SubscriptionInUseException) \{
+ continue;
+ \}
+
+ return;
+ \} finally \{
+ subscriptionWorker.close();
+ \}
+\}
+`}
+
+
+
+
+
+## Subscription that ends when no documents left
+
+Here we create a subscription client that runs until there are no more new documents left to process.
+
+This is useful for ad-hoc, single-use processing where the user needs to ensure that all documents are fully processed.
+
+
+
+{`SubscriptionWorkerOptions options = new SubscriptionWorkerOptions(subsId);
+
+// Here we ask the worker to stop when there are no documents left to send.
+// Will throw SubscriptionClosedException when it finishes its job
+options.setCloseWhenNoDocsLeft(true);
+SubscriptionWorker highValueOrdersWorker = store
+ .subscriptions().getSubscriptionWorker(OrderAndCompany.class, options);
+
+try \{
+ highValueOrdersWorker.run(batch -> \{
+ for (SubscriptionBatch.Item item : batch.getItems()) \{
+ sendThankYouNoteToEmployee(item.getResult());
+ \}
+ \});
+\} catch (SubscriptionClosedException e) \{
+ //that's expected
+\}
+`}
+
+
+
+
+
+## Worker that processes raw objects
+
+Here we create a worker that processes the received data as `ObjectNode` objects.
+
+
+
+{`String subscriptionName = "My dynamic subscription";
+
+SubscriptionCreationOptions subscriptionCreationOptions = new SubscriptionCreationOptions();
+subscriptionCreationOptions.setName("My dynamic subscription");
+subscriptionCreationOptions.setQuery("from Orders as o \\n" +
+ "select \{ \\n" +
+ " DynamicField_1: 'Company:' + o.Company + ' Employee: ' + o.Employee \\n" +
+ "\}");
+
+SubscriptionWorker worker = store.subscriptions().getSubscriptionWorker(subscriptionName);
+worker.run(x -> \{
+ for (SubscriptionBatch.Item item : x.getItems()) \{
+ ObjectNode result = item.getResult();
+ raiseNotification(result.get("DynamicField_1"));
+ \}
+\});
+`}
+
+
+
+
+
+## Worker that operates with a session
+
+Here we create a subscription that sends Order documents that do not have a shipping date.
+The worker receiving these documents will update the `ShippedAt` field value and save the document back to the server via the session.
+
+
+
+{`SubscriptionCreationOptions subscriptionCreationOptions = new SubscriptionCreationOptions();
+subscriptionCreationOptions.setQuery("from Orders as o where o.ShippedAt = null");
+String subscriptionName = store.subscriptions().create(subscriptionCreationOptions);
+
+SubscriptionWorker subscriptionWorker = store.subscriptions().getSubscriptionWorker(Order.class, subscriptionName);
+
+subscriptionWorker.run(batch -> \{
+ try (IDocumentSession session = batch.openSession()) \{
+ for (SubscriptionBatch.Item orderItem : batch.getItems()) \{
+ transferOrderToShipmentCompany(orderItem.getResult());
+ orderItem.getResult().setShippedAt(new Date());
+ \}
+
+ // we know that we have at least one order to ship,
+        // because the subscription query above has that in its WHERE clause
+ session.saveChanges();
+ \}
+\});
+`}
+
+
+
+
+
+## Subscription that uses included documents
+
+Here we create a subscription that, in addition to sending all the _Order_ documents to the worker,
+will include all the referenced _Product_ documents in the batch sent to the worker.
+
+When the worker accesses these _Product_ documents, no additional requests will be made to the server.
+
+
+
+{`SubscriptionCreationOptions subscriptionCreationOptions = new SubscriptionCreationOptions();
+subscriptionCreationOptions.setQuery("from Orders include Lines[].Product");
+
+
+String subscriptionName = store.subscriptions().create(subscriptionCreationOptions);
+
+SubscriptionWorker subscriptionWorker = store.subscriptions().getSubscriptionWorker(Order.class, subscriptionName);
+
+subscriptionWorker.run(batch -> \{
+ try (IDocumentSession session = batch.openSession()) \{
+ for (SubscriptionBatch.Item orderItem : batch.getItems()) \{
+ Order order = orderItem.getResult();
+ for (OrderLine orderLine : order.getLines()) \{
+ // this line won't generate a request, because orderLine.Product was included
+ Product product = session.load(Product.class, orderLine.getProduct());
+ raiseProductNotification(order, product);
+ \}
+ \}
+ \}
+\});
+`}
+
+
+
+
+
+## Primary and secondary workers
+
+Here we create two workers:
+
+* The primary worker, with a `TAKE_OVER` strategy, will take over the other worker and establish the connection.
+* The secondary worker, with a `WAIT_FOR_FREE` strategy, will wait for the first worker to fail (due to machine failure, etc.).
+
+The primary worker:
+
+
+
+{`SubscriptionWorkerOptions options1 = new SubscriptionWorkerOptions(subscriptionName);
+options1.setStrategy(SubscriptionOpeningStrategy.TAKE_OVER);
+SubscriptionWorker worker1 = store.subscriptions().getSubscriptionWorker(Order.class, options1);
+
+
+while (true) \{
+ try \{
+ worker1
+ .run(x -> \{
+ // your logic
+ \});
+ \} catch (Exception e) \{
+ // retry
+ \}
+\}
+`}
+
+
+
+The secondary worker:
+
+
+
+{`SubscriptionWorkerOptions options2 = new SubscriptionWorkerOptions(subscriptionName);
+options2.setStrategy(SubscriptionOpeningStrategy.WAIT_FOR_FREE);
+SubscriptionWorker worker2 = store.subscriptions().getSubscriptionWorker(Order.class, options2);
+
+while (true) \{
+ try \{
+ worker2
+ .run(x -> \{
+ // your logic
+ \});
+ \} catch (Exception e) \{
+ // retry
+ \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-nodejs.mdx
new file mode 100644
index 0000000000..e1a57b8406
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-nodejs.mdx
@@ -0,0 +1,456 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Client with full exception handling and processing retries](../../../client-api/data-subscriptions/consumption/examples.mdx#client-with-full-exception-handling-and-processing-retries)
+ * [Worker with a specified batch size](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-with-a-specified-batch-size)
+ * [Worker that operates with a session](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-operates-with-a-session)
+ * [Worker that processes dynamic objects](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-processes-dynamic-objects)
+ * [Subscription that ends when no documents are left](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-ends-when-no-documents-are-left)
+ * [Subscription that uses included documents](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-uses-included-documents)
+ * [Primary and secondary workers](../../../client-api/data-subscriptions/consumption/examples.mdx#primary-and-secondary-workers)
+
+
+## Client with full exception handling and processing retries
+
+Here we implement a client that handles exceptions thrown by the worker.
+If the exception is recoverable, the client retries creating the worker.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+const subscriptionName = await documentStore.subscriptions.create(\{
+ name: "ProcessOrdersWithLowFreight",
+ query: "from Orders where Freight < 0.5"
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+await setupReconnectingWorker(subscriptionName);
+
+async function setupReconnectingWorker(subscriptionName) \{
+ let subscriptionWorker;
+
+ await reconnect();
+
+ function closeWorker(worker) \{
+ worker.dispose();
+ \}
+
+ async function reconnect() \{
+ if (subscriptionWorker) \{
+ closeWorker(subscriptionWorker);
+ \}
+
+ // Configure the worker:
+ const subscriptionWorkerOptions = \{
+ subscriptionName: subscriptionName,
+ // Allow a downtime of up to 2 hours
+ maxErroneousPeriod: 2 * 3600 * 1000,
+ // Wait 2 minutes before reconnecting
+ timeToWaitBeforeConnectionRetry: 2 * 60 * 1000
+ \};
+
+ subscriptionWorker =
+ store.subscriptions.getSubscriptionWorker(subscriptionWorkerOptions);
+
+ // Subscribe to connection retry events,
+ // and log any exceptions that occur during processing
+ subscriptionWorker.on("connectionRetry", error => \{
+ console.error(
+ "Error during subscription processing: " + subscriptionName, error);
+ \});
+
+ // Run the worker:
+ // ===============
+ subscriptionWorker.on("batch", (batch, callback) => \{
+ try \{
+ for (const item of batch.items) \{
+ const orderDocument = item.result;
+
+ // Forcefully stop subscription processing if the ID is "companies/46-A"
+ // and throw an exception to let external logic handle the specific case
+ if (orderDocument.Company && orderDocument.Company === "companies/46-A") \{
+ // 'The InvalidOperationException' thrown from here
+ // will be wrapped by \`SubscriberErrorException\`
+ callback(new InvalidOperationException(
+                    "Company ID can't be 'companies/46-A', please fix"));
+ return;
+ \}
+
+ // Process the order document - provide your own logic
+ processOrder(orderDocument);
+ \}
+ // Call 'callback' once you're done
+ // The worker will send an acknowledgement to the server,
+ // so that server can send next batch
+ callback();
+ \}
+ catch(err) \{
+ callback(err);
+ \}
+ \});
+
+ // Handle errors:
+ // ==============
+ subscriptionWorker.on("error", error => \{
+ console.error("Failure in subscription: " + subscriptionName, error);
+
+ // The following exceptions are Not recoverable
+ if (error.name === "DatabaseDoesNotExistException" ||
+ error.name === "SubscriptionDoesNotExistException" ||
+ error.name === "SubscriptionInvalidStateException" ||
+ error.name === "AuthorizationException") \{
+ throw error;
+ \}
+
+ if (error.name === "SubscriptionClosedException") \{
+ // Subscription probably closed explicitly by admin
+ return closeWorker(subscriptionWorker);
+ \}
+
+ if (error.name === "SubscriberErrorException") \{
+ // For the InvalidOperationException we want to throw an exception,
+ // otherwise, continue processing
+ if (error.cause && error.cause.name === "InvalidOperationException") \{
+ throw error;
+ \}
+
+ setTimeout(reconnect, 1000);
+ return;
+ \}
+
+ // Handle this depending on the subscription opening strategy
+ if (error.name === "SubscriptionInUseException") \{
+ setTimeout(reconnect, 1000);
+ return;
+ \}
+
+ setTimeout(reconnect, 1000);
+ return;
+ \});
+
+ // Handle worker end event:
+ // ========================
+ subscriptionWorker.on("end", () => \{
+ closeWorker(subscriptionWorker);
+ \});
+ \}
+\}
+`}
+
+
+
+
+
+## Worker with a specified batch size
+
+Here we create a worker and specify the maximum number of documents the server will send to the worker in each batch.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+const subscriptionName = await documentStore.subscriptions.create(\{
+ name: "ProcessOrders",
+ query: "from Orders"
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+const workerOptions = \{
+ subscriptionName: subscriptionName,
+ maxDocsPerBatch: 20 // Set the maximum number of documents per batch
+\};
+
+const worker = documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+
+worker.on("batch", (batch, callback) => \{
+ try \{
+ // Add your logic for processing the incoming batch items here...
+
+ // Call 'callback' once you're done
+ // The worker will send an acknowledgement to the server,
+ // so that server can send next batch
+ callback();
+
+ \} catch(err) \{
+ callback(err);
+ \}
+\});
+`}
+
+
+
+
+
+## Worker that operates with a session
+
+Here we create a subscription that sends _Order_ documents that do not have a shipping date.
+The worker receiving these documents will update the `ShippedAt` field value and save the document back to the server via the session.
+
+
+Note:
+The session is opened with `batch.openSession` instead of with `documentStore.openSession`.
+
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+const subscriptionName = await documentStore.subscriptions.create(\{
+ name: "ProcessOrdersThatWereNotShipped",
+ query: "from Orders as o where o.ShippedAt = null"
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+const workerOptions = \{ subscriptionName \};
+const worker = documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+
+worker.on("batch", async (batch, callback) => \{
+ try \{
+ // Open a session with 'batch.openSession'
+ const session = batch.openSession();
+
+ for (const item of batch.items) \{
+            const orderDocument = item.result;
+
+ transferOrderToShipmentCompany(orderDocument); // call your custom method
+ orderDocument.ShippedAt = new Date(); // update the document field
+ \}
+
+ // Save the updated Order documents
+ await session.saveChanges();
+ callback();
+
+ \} catch(err) \{
+ callback(err);
+ \}
+\});
+`}
+
+
+
+
+
+## Worker that processes dynamic objects
+
+Here we define a subscription that projects the _Order_ documents into a dynamic format.
+The worker processes the dynamic objects it receives.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+const subscriptionName = await documentStore.subscriptions.create(\{
+ name: "ProcessDynamicFields",
+ query: \`From Orders as o
+ Select \{
+ dynamicField: "Company: " + o.Company + " Employee: " + o.Employee,
+ \}\`
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+const workerOptions = \{ subscriptionName \};
+const worker = documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+
+worker.on("batch", (batch, callback) => \{
+ for (const item of batch.items) \{
+
+ // Access the dynamic field in the document
+ const field = item.result.dynamicField;
+
+ // Call your custom method
+ processItem(field);
+ \}
+
+ callback();
+\});
+`}
+
+
+
+
+
+## Subscription that ends when no documents are left
+
+Here we create a subscription client that runs until there are no more new documents to process.
+This is useful for ad-hoc, single-use processing where the user needs to ensure that all documents are fully processed.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+// Define the filtering criteria
+const query = \`
+ declare function getOrderLinesSum(doc) \{
+ var sum = 0;
+ for (var i in doc.Lines) \{
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ \}
+ return sum;
+ \}
+
+ from Orders as o
+ where getOrderLinesSum(o) > 10_000\`;
+
+// Create the subscription with the defined query
+const subscriptionName = await documentStore.subscriptions.create(\{ query \});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+const workerOptions = \{
+ subscriptionName: subscriptionName,
+ // Here we set the worker to stop when there are no more documents left to send
+    // Will throw SubscriptionClosedException when it finishes its job
+ closeWhenNoDocsLeft: true
+\};
+
+const highValueOrdersWorker =
+ documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+
+highValueOrdersWorker.on("batch", (batch, callback) => \{
+ for (const item of batch.items) \{
+ sendThankYouNoteToEmployee(item.result); // call your custom method
+ \}
+
+ callback();
+\});
+
+highValueOrdersWorker.on("error", err => \{
+ if (err.name === "SubscriptionClosedException") \{
+ // That's expected, no more documents to process
+ \}
+\});
+`}
+
+
+
+
+
+## Subscription that uses included documents
+
+Here we create a subscription that, in addition to sending all the _Order_ documents to the worker,
+will include all the referenced _Product_ documents in the batch sent to the worker.
+
+When the worker accesses these _Product_ documents, no additional requests will be made to the server.
+
+
+
+{`// Create the subscription task on the server:
+// ===========================================
+
+const subscriptionName = await documentStore.subscriptions.create(\{
+ name: "ProcessIncludedDocuments",
+ query: \`from Orders include Lines[].Product\`
+\});
+
+// Create the subscription worker that will consume the documents:
+// ===============================================================
+
+const workerOptions = \{ subscriptionName \};
+const worker = documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+
+worker.on("batch", async (batch, callback) => \{
+ // Open a session via 'batch.openSession'
+ // in order to access the Product documents
+ const session = batch.openSession();
+
+ for (const item of batch.items) \{
+ const orderDocument = item.result;
+
+ for (const orderLine of orderDocument.Lines)
+ \{
+ // Calling 'load' will Not generate a request to the server,
+ // because orderLine.Product was included in the batch
+ const product = await session.load(orderLine.Product);
+ const productName = product.Name;
+
+ // Call your custom method
+            processOrderAndProduct(orderDocument, product);
+ \}
+ \}
+
+ callback();
+\});
+`}
+
+
+
+
+
+## Primary and secondary workers
+
+Here we create two workers:
+
+* The primary worker, with a `TakeOver` strategy, will take over the other worker and establish the connection.
+* The secondary worker, with a `WaitForFree` strategy, will wait for the first worker to fail (due to machine failure, etc.).
+
+The primary worker:
+
+
+
+{`const workerOptions1 = \{
+ subscriptionName,
+ strategy: "TakeOver",
+ documentType: Order
+\};
+
+const worker1 = documentStore.subscriptions.getSubscriptionWorker(workerOptions1);
+
+worker1.on("batch", (batch, callback) => \{
+ // your logic
+ callback();
+\});
+
+worker1.on("error", err => \{
+ // retry
+\});
+`}
+
+
+
+The secondary worker:
+
+
+
+{`const workerOptions2 = \{
+ subscriptionName,
+ strategy: "WaitForFree",
+ documentType: Order
+\};
+
+const worker2 = documentStore.subscriptions.getSubscriptionWorker(workerOptions2);
+
+worker2.on("batch", (batch, callback) => \{
+ // your logic
+ callback();
+\});
+
+worker2.on("error", err => \{
+ // retry
+\});
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-python.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-python.mdx
new file mode 100644
index 0000000000..37569345e2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_examples-python.mdx
@@ -0,0 +1,314 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Client with full exception handling and processing retries](../../../client-api/data-subscriptions/consumption/examples.mdx#client-with-full-exception-handling-and-processing-retries)
+ * [Worker with a specified batch size](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-with-a-specified-batch-size)
+ * [Worker that operates with a session](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-operates-with-a-session)
+ * [Worker that processes dynamic objects](../../../client-api/data-subscriptions/consumption/examples.mdx#worker-that-processes-dynamic-objects)
+ * [Subscription that ends when no documents are left](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-ends-when-no-documents-are-left)
+ * [Subscription that uses included documents](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-uses-included-documents)
+ * [Subscription workers with failover on other nodes](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-workers-with-failover-on-other-nodes)
+ * [Primary and secondary workers](../../../client-api/data-subscriptions/consumption/examples.mdx#primary-and-secondary-workers)
+
+
+## Client with full exception handling and processing retries
+
+Here we implement a client that handles exceptions thrown by a worker.
+If the exception is recoverable, the client retries creating the worker.
+
+
+
+{`while True:
+ options = SubscriptionWorkerOptions(subscription_name)
+
+    # Allow a downtime of up to 2 hours, and wait 2 minutes between reconnection attempts
+ options.max_erroneous_period = timedelta(hours=2)
+ options.time_to_wait_before_connection_retry = timedelta(minutes=2)
+
+ subscription_worker = store.subscriptions.get_subscription_worker(options, Order)
+
+ try:
+        # Here we can be informed of any exceptions that happen during processing
+ subscription_worker.add_on_subscription_connection_retry(
+ lambda exception: logger.error(
+ f"Error during subscription processing: \{subscription_name\}", exc_info=exception
+ )
+ )
+
+ def _process_documents_callback(batch: SubscriptionBatch[Order]):
+ for item in batch.items:
+ # we want to force close the subscription processing in that case
+ # and let the external code decide what to do with that
+ if item.result.company == "companies/2-A":
+ raise UnsupportedCompanyException(
+ "Company Id can't be 'companies/2-A', you must fix this"
+ )
+ process_order(item.result)
+
+ # Run will complete normally if you have disposed the subscription
+ return
+
+ # Pass the callback to worker.run()
+ subscription_worker.run(_process_documents_callback)
+
+ except Exception as e:
+ logger.error(f"Failure in subscription: \{subscription_name\}", exc_info=e)
+ exception_type = type(e)
+ if (
+ exception_type is DatabaseDoesNotExistException
+ or exception_type is SubscriptionDoesNotExistException
+ or exception_type is SubscriptionInvalidStateException
+ or exception_type is AuthorizationException
+ ):
+ raise # not recoverable
+
+ if exception_type is SubscriptionClosedException:
+            # probably closed explicitly by an admin
+ return
+
+ if exception_type is SubscriberErrorException:
+ # for UnsupportedCompanyException type, we want to throw an exception, otherwise
+ # we continue processing
+            if len(e.args) > 1 and isinstance(e.args[1], UnsupportedCompanyException):
+ raise
+
+ continue
+
+ # handle this depending on subscription
+ # open strategy (discussed later)
+        if exception_type is SubscriptionInUseException:
+ continue
+
+ return
+ finally:
+ subscription_worker.close(False)
+`}
+
+
+
+
+
+## Worker with a specified batch size
+
+Here we create a worker and specify the maximum number of documents the server will send to the worker in each batch.
+
+
+
+{`worker_w_batch = store.subscriptions.get_subscription_worker(
+ SubscriptionWorkerOptions(subscription_name, max_docs_per_batch=20), Order
+)
+
+_ = worker_w_batch.run(
+ process_documents=lambda batch: ...
+) # Pass your method that takes SubscriptionBatch[_T] as an argument, with your logic in it
+`}
+
+
+
+
+
+## Worker that operates with a session
+
+Here we create a subscription that sends _Order_ documents that do not have a shipping date.
+The worker receiving these documents will update the `ShippedAt` field value and save the document back to the server via the session.
+
+
+
+{`subscription_name = store.subscriptions.create_for_options(
+ SubscriptionCreationOptions(query="from Orders as o where o.ShippedAt = null")
+)
+
+subscription_worker = store.subscriptions.get_subscription_worker_by_name(subscription_name, Order)
+
+def _transfer_order_callback(batch: SubscriptionBatch[Order]):
+ with batch.open_session() as session:
+ for order in (item.result for item in batch.items):
+ transfer_order_to_shipment_company(order)
+ order.shipped_at = datetime.utcnow()
+
+ # we know that we have at least one order to ship,
+        # because the subscription query above has that in its WHERE clause
+ session.save_changes()
+
+_ = subscription_worker.run(_transfer_order_callback)
+`}
+
+
+
+
+
+## Worker that processes dynamic objects
+
+Here we define a subscription that projects the _Order_ documents into a dynamic format.
+The worker processes the dynamic objects it receives.
+
+
+
+{`subscription_name = "My dynamic subscription"
+store.subscriptions.create_for_class(
+ Order,
+ SubscriptionCreationOptions(
+ subscription_name,
+ query="""
+ From Orders as o
+ Select
+ \{
+ dynamic_field_1: "Company: " + o.Company + " Employee: " + o.Employee,
+ \}
+ """,
+ ),
+)
+
+subscription_worker = store.subscriptions.get_subscription_worker_by_name(subscription_name)
+
+def _raise_notification_callback(batch: SubscriptionBatch[Order]):
+ for item in batch.items:
+ raise_notification(item.result.dynamic_field_1)
+
+_ = subscription_worker.run(_raise_notification_callback)
+`}
+
+
+
+
+
+## Subscription that ends when no documents are left
+
+Here we create a subscription client that runs only until there are no more new documents left to process.
+
+This is useful for ad-hoc, single-use processing where the user needs to ensure that all documents are fully processed.
+
+
+
+{`high_value_orders_worker = store.subscriptions.get_subscription_worker(
+ SubscriptionWorkerOptions(
+ subs_id,
+ # Here we ask the worker to stop when there are no documents left to send.
+ # Will throw SubscriptionClosedException when it finishes its job
+ close_when_no_docs_left=True,
+ ),
+ OrderAndCompany,
+)
+
+try:
+
+ def _subscription_batch_callback(batch: SubscriptionBatch[OrderAndCompany]):
+ for item in batch.items:
+ send_thank_you_note_to_employee(item.result)
+
+ high_value_orders_worker.run(_subscription_batch_callback)
+except SubscriptionClosedException:
+ # that's expected
+ ...
+`}
+
+
+
+
+
+## Subscription that uses included documents
+
+Here we create a subscription that, in addition to sending all the _Order_ documents to the worker,
+will include all the referenced _Product_ documents in the batch sent to the worker.
+
+When the worker accesses these _Product_ documents, no additional requests will be made to the server.
+
+
+
+{`# Create the subscription task on the server:
+# ===========================================
+
+subscription_name = store.subscriptions.create_for_options(
+    # Include the referenced Product documents for each Order document
+    SubscriptionCreationOptions(query="from Orders include Lines[].Product")
+)
+
+# Create the subscription worker that will consume the documents:
+# ===============================================================
+
+subscription_worker = store.subscriptions.get_subscription_worker_by_name(subscription_name, Order)
+
+def _process_order_and_product_callback(batch: SubscriptionBatch[Order]):
+    # Open a session via 'batch.open_session'
+    # in order to access the Product documents
+    with batch.open_session() as session:
+        for item in batch.items:
+            order = item.result
+            for order_line in order.lines:
+                # Calling 'load' will Not generate a request to the server,
+                # because the Product document was included in the batch
+                product = session.load(order_line.product)
+
+                process_order_and_product(order, product)  # call your custom method
+
+_ = subscription_worker.run(_process_order_and_product_callback)
+`}
+
+
+
+
+
+## Subscription workers with failover on other nodes
+
+In this configuration, any available node will create a worker.
+If the worker fails, another available node will take over.
+
+
+
+{`worker = store.subscriptions.get_subscription_worker(
+ SubscriptionWorkerOptions(subscription_name, strategy=SubscriptionOpeningStrategy.WAIT_FOR_FREE), Order
+)
+`}
+
+
+
+
+
+## Primary and secondary workers
+
+Here we create two workers:
+
+* The primary worker, with a `TAKE_OVER` strategy, will take over the other worker and establish the connection.
+* The secondary worker, with a `WAIT_FOR_FREE` strategy, will wait for the first worker to fail (due to machine failure, etc.).
+
+The primary worker:
+
+
+{`primary_worker = store.subscriptions.get_subscription_worker(
+    SubscriptionWorkerOptions(subscription_name, strategy=SubscriptionOpeningStrategy.TAKE_OVER), Order
+)
+
+while True:
+ try:
+ run_future = primary_worker.run(lambda batch: ...) # your logic
+ except Exception:
+ ... # retry
+`}
+
+
+
+The secondary worker:
+
+
+{`secondary_worker = store.subscriptions.get_subscription_worker(
+    SubscriptionWorkerOptions(subscription_name, strategy=SubscriptionOpeningStrategy.WAIT_FOR_FREE), Order
+)
+
+while True:
+ try:
+ run_future = secondary_worker.run(lambda batch: ...) # your logic
+ except Exception:
+ ... # retry
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-csharp.mdx
new file mode 100644
index 0000000000..b5114a2de6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-csharp.mdx
@@ -0,0 +1,198 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Batches of documents sent from a Subscription Task defined on the server are consumed and processed by a subscription worker client.
+
+* The `SubscriptionWorker` object, defined on the client, manages the communication between the server and the client and processes the document batches sent from the server.
+
+* There are several ways to create and configure the SubscriptionWorker - see [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions).
+
+* In this page:
+ * [SubscriptionWorker lifecycle](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#subscriptionworker-lifecycle)
+ * [Error handling](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#error-handling)
+ * [Worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)
+ * [Determining which workers a subscription will serve](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#determining-which-workers-a-subscription-will-serve)
+
+
+## SubscriptionWorker lifecycle
+
+A `SubscriptionWorker` object is first created via the `DocumentStore.Subscriptions` property:
+
+
+{`subscriptionWorker = store.Subscriptions.GetSubscriptionWorker(subscriptionName);
+`}
+
+
+
+At this point, the worker holds only its configuration; no connection or processing takes place yet.
+To start processing, call the `Run` method, passing it the batch processing logic to perform:
+
+
+{`subscriptionRuntimeTask = subscriptionWorker.Run(batch =>
+\{
+ // your logic here
+\});
+`}
+
+
+
+From this point on, the subscription worker will start processing batches.
+If processing is aborted for any reason, the returned task (`subscriptionRuntimeTask`) will complete with an exception.
+
+
+
+## Error handling
+
+
+
+Subscription worker connection failures may occur during the routine communication between the worker and the server.
+When an unexpected error arises, the worker will attempt to **reconnect to the server**.
+
+However, there are several conditions under which the worker will stop its operation but will Not attempt to reconnect:
+
+* The subscription no longer exists or has been deleted.
+* Another worker has taken control of the subscription (see [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)).
+* The worker is unable to connect to any of the servers.
+* The worker could not determine which node is responsible for the task
+  (this can happen when there is no leader in the cluster).
+* An authorization exception occurred.
+* An exception occurred during the connection establishment phase.
+* The database doesn't exist.
+
+
+
+
+
+An exception may occur while processing a batch of documents in the worker.
+For example:
+
+
+
+{`_ = workerWBatch.Run(x => throw new Exception());
+`}
+
+
+
+When creating a worker, the worker can be configured to handle these exceptions in either of the following ways,
+depending on the `IgnoreSubscriberErrors` property in [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions):
+
+* **Abort processing completely**
+ When `IgnoreSubscriberErrors = false` (default):
+ The current batch processing will be aborted, and in this case, the worker will wrap the thrown exception in a `SubscriberErrorException` and will rethrow it.
+ Processing of the subscription will be terminated without acknowledging progress to the server or retrying to connect.
+ As a result, the task returned by the `Run` function will complete in an erroneous state, throwing a _SubscriberErrorException_.
+
+* **Continue processing subsequent batches**
+ When `IgnoreSubscriberErrors = true`:
+ The current batch processing will be aborted; however, the erroneous batch will be acknowledged without retrying,
+ and processing will continue with the next batches.
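+
+For example, a worker configured to skip over faulty batches might look like this - a minimal sketch, assuming `subscriptionName` already exists:
+
+
+{`var options = new SubscriptionWorkerOptions(subscriptionName)
+\{
+    // Acknowledge failed batches and continue with the next ones
+    IgnoreSubscriberErrors = true
+\};
+
+var tolerantWorker = store.Subscriptions.GetSubscriptionWorker<Order>(options);
+`}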
+
+
+
+
+
+Two properties in the [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions)
+object control the behavior of a worker attempting to reconnect with the server:
+
+* `TimeToWaitBeforeConnectionRetry`
+ The time the worker will wait before attempting to reconnect.
+ Default: 5 seconds.
+* `MaxErroneousPeriod`
+ The maximum amount of time the subscription connection can remain in an erroneous state.
+ Once this period is exceeded, the worker will stop trying to reconnect.
+ Default: 5 minutes.
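+
+For example, to tolerate a longer outage with less frequent retries - a sketch with illustrative values:
+
+
+{`var options = new SubscriptionWorkerOptions(subscriptionName);
+
+// Wait 10 seconds between reconnection attempts,
+// and stop retrying after 1 hour in an erroneous state
+options.TimeToWaitBeforeConnectionRetry = TimeSpan.FromSeconds(10);
+options.MaxErroneousPeriod = TimeSpan.FromHours(1);
+`}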
+
+
+
+
+
+A worker will time out after losing its connectivity with the server for a given time period.
+
+* The timeout period can be set using the `ConnectionStreamTimeout` option. E.g.:
+
+
+{`var options = new SubscriptionWorkerOptions(subscriptionName);
+
+// Set the worker's timeout period
+options.ConnectionStreamTimeout = TimeSpan.FromSeconds(45);
+`}
+
+
+* Default timeout period: 30 seconds
+
+
+
+
+
+`OnUnexpectedSubscriptionError` is the event that is triggered when a connection failure occurs between the subscription worker and the server,
+resulting in an unexpected exception.
+When this happens, the worker will automatically attempt to reconnect.
+This event is useful for logging these unexpected exceptions.
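+
+For example - a minimal sketch that only logs the failure:
+
+
+{`subscriptionWorker.OnUnexpectedSubscriptionError += exception =>
+\{
+    // The worker will attempt to reconnect on its own; we only log the error here
+    Console.WriteLine($"Unexpected subscription error: \{exception\}");
+\};
+`}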
+
+
+
+
+
+## Worker strategies
+
+Subscription workers are configured with a **strategy** that determines whether multiple workers
+can connect to the subscription concurrently or if only one worker can connect at a time.
+
+The _one-worker-at-a-time_ strategies also determine how workers interact with each other
+to resolve which of them will establish the subscription connection.
+
+### One worker per subscription strategies
+
+The following three strategies allow only a **single worker to connect to the subscription at any given time**,
+and determine what happens when one worker is connected and another tries to connect.
+
+* `SubscriptionOpeningStrategy.OpenIfFree`
+ The server will allow a worker to connect only if no other worker is currently connected.
+ If there is an existing connection, the incoming worker will throw a `SubscriptionInUseException`.
+* `SubscriptionOpeningStrategy.WaitForFree`
+ If the worker cannot open the subscription because it is in use by another worker, it will wait for the currently connected worker to disconnect before establishing the connection.
+ This is useful in worker failover scenarios, where one worker is connected while another is awaiting its turn to take its place.
+* `SubscriptionOpeningStrategy.TakeOver`
+ The server will allow an incoming connection to take over an existing one,
+ based on the connection strategy in use by the currently connected worker:
+ * If the existing connection **does not** have a `TakeOver` strategy:
+ The incoming connection will take over, causing the existing connection to throw a `SubscriptionInUseException`.
+ * If the existing connection **has** a `TakeOver` strategy:
+ The incoming connection will throw a `SubscriptionInUseException` exception.
+
+### Multiple workers per subscription strategy
+
+* `SubscriptionOpeningStrategy.Concurrent`
+ The server allows multiple workers to connect to the same subscription **concurrently**.
+ Read more about concurrent subscriptions [here](../../../client-api/data-subscriptions/concurrent-subscriptions.mdx).
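+
+For example, a standby worker that waits for the current connection to be released - a minimal sketch:
+
+
+{`var options = new SubscriptionWorkerOptions(subscriptionName)
+\{
+    // Wait for the currently connected worker to disconnect
+    Strategy = SubscriptionOpeningStrategy.WaitForFree
+\};
+
+var standbyWorker = store.Subscriptions.GetSubscriptionWorker<Order>(options);
+`}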
+
+
+
+## Determining which workers a subscription will serve
+
+
+
+The **strategy used by the first worker connecting to a subscription** determines
+which additional workers the subscription can serve until all worker connections are dropped.
+
+
+
+* A subscription that serves one or more [concurrent](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#multiple-workers-per-subscription-strategy) workers,
+ **can only serve other concurrent workers** until all connections are dropped.
+ If a worker with a [one worker per subscription](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#one-worker-per-subscription-strategies)
+ strategy attempts to connect -
+ * The connection attempt will be rejected.
+ * `SubscriptionInUseException` will be thrown.
+
+* A subscription that serves a worker with a [one worker per subscription](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#one-worker-per-subscription-strategies) strategy,
+ **cannot** serve [concurrent](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#multiple-workers-per-subscription-strategy)
+ workers until that worker's connection is dropped.
+ If a concurrent worker attempts to connect -
+ * The connection attempt will be rejected.
+ * `SubscriptionInUseException` will be thrown.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-java.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-java.mdx
new file mode 100644
index 0000000000..5c80940a9f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-java.mdx
@@ -0,0 +1,129 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+Subscriptions are consumed by processing batches of documents received from the server.
+A `SubscriptionWorker` object manages the document processing and the communication between the client and the server, according to a set of configurations received upon its creation.
+There are several ways to create and configure a SubscriptionWorker, from providing just a subscription name to passing a detailed configuration object - `SubscriptionWorkerOptions`.
+
+* In this page:
+ * [SubscriptionWorker lifecycle](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#subscriptionworker-lifecycle)
+ * [Error handling](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#error-handling)
+ * [Worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)
+
+
+## SubscriptionWorker lifecycle
+
+A `SubscriptionWorker` object is first created via the `DocumentStore.subscriptions()` method:
+
+
+
+{`subscriptionWorker = store.subscriptions().getSubscriptionWorker(Order.class, subscriptionName);
+`}
+
+
+
+At this point, the worker holds only its configuration; no connection or processing takes place yet.
+To start processing, call the `run` method, passing it the batch processing logic to perform:
+
+
+
+{`subscriptionRuntimeTask = subscriptionWorker.run(batch -> \{
+ // your logic here
+\});
+`}
+
+
+
+From this point on, the subscription worker will start processing batches. If the processing is aborted for any reason, the returned task (`subscriptionRuntimeTask`) will complete with an exception.
+
+
+
+## Error handling
+
+
+
+Subscription worker connection failures may occur during the routine communication between the worker and the server.
+When an unexpected error arises, the worker will attempt to **reconnect to the server**.
+
+However, there are several conditions under which the worker will stop its operation but will Not attempt to reconnect:
+
+* The subscription no longer exists or has been deleted.
+* Another worker has taken control of the subscription (see [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)).
+* The worker is unable to connect to any of the servers.
+* The worker could not determine which node is responsible for the task
+  (this can happen when there is no leader in the cluster).
+* An authorization exception occurred.
+* An exception occurred during the connection establishment phase.
+* The database doesn't exist.
+
+
+
+
+
+An exception may occur while processing a batch of documents in the worker.
+For example:
+
+
+
+{`workerWBatch.run(x -> \{
+ throw new RuntimeException();
+\});
+`}
+
+
+
+When creating a worker, the worker can be configured to handle these exceptions in either of the following ways,
+depending on the `IgnoreSubscriberErrors` property in [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions):
+
+* **Abort processing completely**
+ When `IgnoreSubscriberErrors` is set to _false_ (default):
+ The current batch processing will be aborted, and in this case, the worker will wrap the thrown exception in a `SubscriberErrorException` and will rethrow it.
+ Processing of the subscription will be terminated without acknowledging progress to the server or retrying to connect.
+  As a result, the task returned by the `run` method will complete in an erroneous state, throwing a _SubscriberErrorException_.
+
+* **Continue processing subsequent batches**
+ When `IgnoreSubscriberErrors` is set to _true_:
+ The current batch processing will be aborted; however, the erroneous batch will be acknowledged without retrying,
+ and processing will continue with the next batches.
+
+
+
+
+
+Two properties in the [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions)
+object control the behavior of a worker attempting to reconnect with the server:
+
+* `timeToWaitBeforeConnectionRetry`
+ The time the worker will wait before attempting to reconnect.
+ Default: 5 seconds.
+* `maxErroneousPeriod`
+ The maximum amount of time the subscription connection can remain in an erroneous state.
+ Once this period is exceeded, the worker will stop trying to reconnect.
+ Default: 5 minutes.
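+
+For example - a sketch with illustrative values, assuming the Java client's standard setter naming:
+
+
+{`SubscriptionWorkerOptions options = new SubscriptionWorkerOptions(subscriptionName);
+
+// Wait 10 seconds between reconnection attempts,
+// and stop retrying after 1 hour in an erroneous state
+options.setTimeToWaitBeforeConnectionRetry(Duration.ofSeconds(10));
+options.setMaxErroneousPeriod(Duration.ofHours(1));
+`}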
+
+
+
+
+
+## Worker strategies
+
+Only one subscription worker can be actively working on a subscription at any given time.
+Nevertheless, there are scenarios in which an existing subscription worker and one that tries to connect must interact.
+This relationship and interoperation are configured by the `Strategy` field of `SubscriptionWorkerOptions`.
+The strategy field is an enum with the following values:
+
+* `OPEN_IF_FREE` - the server will allow the worker to connect only if no other worker is currently connected.
+  If there is an existing connection, the incoming worker will throw a `SubscriptionInUseException`.
+* `WAIT_FOR_FREE` - if the client cannot currently open the subscription because it is used by another client, it will wait for the previous client to disconnect and only then connect.
+  This is useful in client failover scenarios where there is one active client and another already waiting to take its place.
+* `TAKE_OVER` - the server will allow an incoming connection to take over an existing one, depending on the existing connection's strategy:
+  * If the existing connection's strategy is not `TAKE_OVER`, the incoming connection will take over, causing the existing connection to throw a `SubscriptionInUseException`.
+  * If the existing connection's strategy is `TAKE_OVER`, the incoming connection will throw a `SubscriptionInUseException`.
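+
+For example, a worker using the `WAIT_FOR_FREE` strategy - a minimal sketch, assuming the Java client's standard setter naming and the options-based `getSubscriptionWorker` overload:
+
+
+{`SubscriptionWorkerOptions options = new SubscriptionWorkerOptions(subscriptionName);
+// Wait for the currently connected worker to disconnect
+options.setStrategy(SubscriptionOpeningStrategy.WAIT_FOR_FREE);
+
+SubscriptionWorker<Order> worker =
+    store.subscriptions().getSubscriptionWorker(Order.class, options);
+`}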
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-nodejs.mdx
new file mode 100644
index 0000000000..07a785a3ad
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-nodejs.mdx
@@ -0,0 +1,204 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Batches of documents sent from a Subscription Task defined on the server are consumed and processed by a subscription worker client.
+
+* The `SubscriptionWorker` object, defined on the client, manages the communication between the server and the client and processes the document batches sent from the server.
+
+* There are several ways to create and configure the SubscriptionWorker - see [subscription worker options](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions).
+
+* In this page:
+ * [SubscriptionWorker lifecycle](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#subscriptionworker-lifecycle)
+ * [Error handling](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#error-handling)
+ * [Worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)
+ * [Determining which workers a subscription will serve](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#determining-which-workers-a-subscription-will-serve)
+
+
+## SubscriptionWorker lifecycle
+
+Create a `SubscriptionWorker` object by calling `getSubscriptionWorker`:
+
+
+
+{`const worker = documentStore.subscriptions.getSubscriptionWorker(\{
+ subscriptionName: "your subscription name"
+\});
+`}
+
+
+
+At this stage, the worker is only initialized; no connection to the server or document processing occurs yet.
+
+To start handling documents from the subscription, you need to define a listener for the `batch` event.
+This event is triggered whenever a new batch of documents is received.
+
+Add an event handler using the worker's `on` method to process incoming batches:
+
+
+
+{`worker.on("batch", (batch, callback) => \{
+ try \{
+ // Add your logic for processing the incoming batch items here...
+
+ // Call 'callback' once you're done
+ // The worker will send an acknowledgement to the server,
+ // allowing the server to send the next batch
+ callback();
+
+ \} catch(err) \{
+ // If processing fails for a particular batch then pass the error to the callback
+ callback(err);
+ \}
+\});
+`}
+
+
+
+Once the event handler is defined, the worker will begin processing batches of documents sent by the server.
+Each batch must be acknowledged by calling `callback()` once processing is complete.
+
+
+
+## Error handling
+
+
+
+Subscription worker connection failures may occur during the routine communication between the worker and the server.
+When an unexpected error arises, the worker will attempt to **reconnect to the server**.
+
+However, there are several conditions under which the worker will stop its operation but will Not attempt to reconnect:
+
+* The subscription no longer exists or has been deleted.
+* Another worker has taken control of the subscription (see [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)).
+* The worker is unable to connect to any of the servers.
+* The worker could not determine which node is responsible for the task
+  (this can happen when there is no leader in the cluster).
+* An authorization exception occurred.
+* An exception occurred during the connection establishment phase.
+* The database doesn't exist.
+
+
+
+
+
+An exception may occur while processing a batch of documents in the worker.
+For example:
+
+
+
+{`worker.on("batch", (batch, callback) => \{
+ try \{
+ throw new Error("Exception occurred");
+ \} catch (err) \{
+ callback(err); // Pass the error to the callback to signal failure
+ \}
+\});
+`}
+
+
+
+When creating a worker, the worker can be configured to handle these exceptions in either of the following ways,
+depending on the `ignoreSubscriberErrors` property in the [subscription worker options](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions):
+
+* **Abort processing completely**
+ When `ignoreSubscriberErrors` is `false` (default):
+ The current batch processing will be aborted, and in this case, the worker will wrap the thrown exception in a `SubscriberErrorException` and will rethrow it.
+ Processing of the subscription will be terminated without acknowledging progress to the server or retrying to connect.
+ As a result, the worker task will complete in an erroneous state, throwing a _SubscriberErrorException_.
+
+* **Continue processing subsequent batches**
+ When `ignoreSubscriberErrors` is `true`:
+ The current batch processing will be aborted; however, the erroneous batch will be acknowledged without retrying,
+ and processing will continue with the next batches.
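+
+For example, a worker configured to skip over faulty batches might look like this - a minimal sketch, assuming `subscriptionName` already exists:
+
+
+{`const tolerantWorker = documentStore.subscriptions.getSubscriptionWorker(\{
+    subscriptionName,
+    // Acknowledge failed batches and continue with the next ones
+    ignoreSubscriberErrors: true
+\});
+`}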
+
+
+
+
+
+Two properties in the [subscription worker options](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions)
+object control the behavior of a worker attempting to reconnect with the server:
+
+* `timeToWaitBeforeConnectionRetry`
+ The time the worker will wait before attempting to reconnect.
+ Default: 5 seconds.
+* `maxErroneousPeriod`
+ The maximum amount of time the subscription connection can remain in an erroneous state.
+ Once this period is exceeded, the worker will stop trying to reconnect.
+ Default: 5 minutes.
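+
+For example - a sketch with illustrative values, assuming both options are specified in milliseconds:
+
+
+{`const workerOptions = \{
+    subscriptionName,
+    // Wait 10 seconds between reconnection attempts
+    timeToWaitBeforeConnectionRetry: 10_000,
+    // Stop retrying after 1 hour in an erroneous state
+    maxErroneousPeriod: 3_600_000
+\};
+
+const worker = documentStore.subscriptions.getSubscriptionWorker(workerOptions);
+`}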
+
+
+
+
+
+`unexpectedSubscriptionError` is the event that is triggered when a connection failure occurs between the subscription worker and the server,
+resulting in an unexpected exception.
+When this happens, the worker will automatically attempt to reconnect.
+This event is useful for logging these unexpected exceptions.
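+
+For example - a minimal sketch that only logs the failure:
+
+
+{`worker.on("unexpectedSubscriptionError", err => \{
+    // The worker will attempt to reconnect on its own; we only log the error here
+    console.error("Unexpected subscription error:", err);
+\});
+`}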
+
+
+
+
+
+## Worker strategies
+
+Subscription workers are configured with a **strategy** that determines whether multiple workers
+can connect to the subscription concurrently or if only one worker can connect at a time.
+
+The _one-worker-at-a-time_ strategies also determine how workers interact with each other
+to resolve which of them will establish the subscription connection.
+
+### One worker per subscription strategies
+
+The following three strategies allow only a **single worker to connect to the subscription at any given time**,
+and determine what happens when one worker is connected and another tries to connect.
+
+* `OpenIfFree`
+ The server will allow a worker to connect only if no other worker is currently connected.
+ If there is an existing connection, the incoming worker will throw a `SubscriptionInUseException`.
+* `WaitForFree`
+ If the worker cannot open the subscription because it is in use by another worker, it will wait for the currently connected worker to disconnect before establishing the connection.
+ This is useful in worker failover scenarios, where one worker is connected while another is awaiting its turn to take its place.
+* `TakeOver`
+ The server will allow an incoming connection to take over an existing one,
+ based on the connection strategy in use by the currently connected worker:
+ * If the existing connection **does not** have a `TakeOver` strategy:
+ The incoming connection will take over, causing the existing connection to throw a `SubscriptionInUseException`.
+ * If the existing connection **has** a `TakeOver` strategy:
+ The incoming connection will throw a `SubscriptionInUseException` exception.
+
+### Multiple workers per subscription strategy
+
+* `Concurrent`
+ The server allows multiple workers to connect to the same subscription **concurrently**.
+ Read more about concurrent subscriptions [here](../../../client-api/data-subscriptions/concurrent-subscriptions.mdx).
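+
+For example, a standby worker that waits for the current connection to be released - a minimal sketch:
+
+
+{`const standbyWorker = documentStore.subscriptions.getSubscriptionWorker(\{
+    subscriptionName,
+    // Wait for the currently connected worker to disconnect
+    strategy: "WaitForFree"
+\});
+`}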
+
+
+
+## Determining which workers a subscription will serve
+
+
+
+The **strategy used by the first worker connecting to a subscription** determines
+which additional workers the subscription can serve until all worker connections are dropped.
+
+
+
+* A subscription that serves one or more [concurrent](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#multiple-workers-per-subscription-strategy) workers,
+ **can only serve other concurrent workers** until all connections are dropped.
+ If a worker with a [one worker per subscription](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#one-worker-per-subscription-strategies)
+ strategy attempts to connect -
+ * The connection attempt will be rejected.
+ * `SubscriptionInUseException` will be thrown.
+
+* A subscription that serves a worker with a [one worker per subscription](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#one-worker-per-subscription-strategies) strategy,
+ **cannot** serve [concurrent](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#multiple-workers-per-subscription-strategy)
+ workers until that worker's connection is dropped.
+ If a concurrent worker attempts to connect -
+ * The connection attempt will be rejected.
+ * `SubscriptionInUseException` will be thrown.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-python.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-python.mdx
new file mode 100644
index 0000000000..9404ade81c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/_how-to-consume-data-subscription-python.mdx
@@ -0,0 +1,182 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Batches of documents sent from a Subscription Task defined on the server are consumed and processed by a subscription worker client.
+
+* The `subscription_worker` object, defined on the client, manages the communication between the server and the client and processes the document batches sent from the server.
+
+* There are several ways to create and configure the SubscriptionWorker - see `SubscriptionWorkerOptions`.
+
+* In this page:
+ * [`subscription_worker` lifecycle](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#subscription_worker-lifecycle)
+ * [Error handling](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#error-handling)
+ * [Worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)
+ * [Determining which workers a subscription will serve](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#determining-which-workers-a-subscription-will-serve)
+
+
+## `subscription_worker` lifecycle
+
+A `subscription_worker` object is first created via `store.subscriptions`:
+
+
+{`subscription_worker = store.subscriptions.get_subscription_worker_by_name(subscription_name, Order)
+`}
+
+
+
+At this point, the worker holds only its configuration; no connection or processing takes place yet.
+To start processing, call the `run` method, passing it the batch processing logic to perform:
+
+
+{`subscription_runtime_task = subscription_worker.run(
+ process_documents=lambda batch: ...
+) # Pass your method that takes SubscriptionBatch[_T] as an argument, with your logic in it
+`}
+
+
+
+From this point on, the subscription worker will start processing batches.
+If processing is aborted for any reason, the returned task (`subscription_runtime_task`) will complete with an exception.
+
+
+
+## Error handling
+
+
+
+Subscription worker connection failures may occur during the routine communication between the worker and the server.
+When an unexpected error arises, the worker will attempt to **reconnect to the server**.
+
+However, there are several conditions under which the worker will stop its operation but will Not attempt to reconnect:
+
+* The subscription no longer exists or has been deleted.
+* Another worker has taken control of the subscription (see [worker strategies](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#worker-strategies)).
+* The worker is unable to connect to any of the servers.
+* The worker could not determine which node is responsible for the task
+  (this can happen when there is no leader in the cluster).
+* An authorization exception occurred.
+* An exception occurred during the connection establishment phase.
+* The database doesn't exist.
+
+
+
+
+
+An exception may occur while processing a batch of documents in the worker.
+For example:
+
+
+
+{`def _throw_exception(batch: SubscriptionBatch):
+ raise Exception()
+
+_ = worker_w_batch.run(_throw_exception)
+`}
+
+
+
+When creating a worker, the worker can be configured to handle these exceptions in either of the following ways,
+depending on the `ignore_subscriber_errors` property in [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions):
+
+* **Abort processing completely**
+ When `ignore_subscriber_errors` is set to _false_ (default):
+ The current batch processing will be aborted, and in this case, the worker will wrap the thrown exception in a `SubscriberErrorException` and will rethrow it.
+ Processing of the subscription will be terminated without acknowledging progress to the server or retrying to connect.
+  As a result, the task returned by the `run` method will complete in an erroneous state, throwing a _SubscriberErrorException_.
+
+* **Continue processing subsequent batches**
+ When `ignore_subscriber_errors` is set to _true_:
+ The current batch processing will be aborted; however, the erroneous batch will be acknowledged without retrying,
+ and processing will continue with the next batches.
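+
+For example, a worker configured to skip over faulty batches might look like this - a minimal sketch, assuming `subscription_name` already exists:
+
+
+{`options = SubscriptionWorkerOptions(subscription_name)
+# Acknowledge failed batches and continue with the next ones
+options.ignore_subscriber_errors = True
+
+tolerant_worker = store.subscriptions.get_subscription_worker(options, Order)
+`}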
+
+
+
+
+
+Two properties in the [SubscriptionWorkerOptions](../../../client-api/data-subscriptions/consumption/api-overview.mdx#subscriptionworkeroptions)
+object control the behavior of a worker attempting to reconnect with the server:
+
+* `time_to_wait_before_connection_retry`
+ The time the worker will wait before attempting to reconnect.
+ Default: 5 seconds.
+* `max_erroneous_period`
+ The maximum amount of time the subscription connection can remain in an erroneous state.
+ Once this period is exceeded, the worker will stop trying to reconnect.
+ Default: 5 minutes.
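+
+For example - a sketch with illustrative values:
+
+
+{`options = SubscriptionWorkerOptions(subscription_name)
+
+# Wait 10 seconds between reconnection attempts,
+# and stop retrying after 1 hour in an erroneous state
+options.time_to_wait_before_connection_retry = timedelta(seconds=10)
+options.max_erroneous_period = timedelta(hours=1)
+`}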
+
+
+
+
+
+`on_unexpected_subscription_error` is the event that is triggered when a connection failure occurs between the subscription worker and the server,
+resulting in an unexpected exception.
+When this happens, the worker will automatically attempt to reconnect.
+This event is useful for logging these unexpected exceptions.
+
+
+
+
+
+## Worker strategies
+
+Subscription workers are configured with a **strategy** that determines whether multiple workers
+can connect to the subscription concurrently or if only one worker can connect at a time.
+
+The _one-worker-at-a-time_ strategies also determine how workers interact with each other
+to resolve which of them will establish the subscription connection.
+
+### One worker per subscription strategies
+
+The following three strategies allow only a **single worker to connect to the subscription at any given time**,
+and determine what happens when one worker is connected and another tries to connect.
+
+* `SubscriptionOpeningStrategy.OPEN_IF_FREE`
+ The server will allow a worker to connect only if no other worker is currently connected.
+ If there is an existing connection, the incoming worker will throw a `SubscriptionInUseException`.
+* `SubscriptionOpeningStrategy.WAIT_FOR_FREE`
+ If the worker cannot open the subscription because it is in use by another worker, it will wait for the currently connected worker to disconnect before establishing the connection.
+ This is useful in worker failover scenarios, where one worker is connected while another is awaiting its turn to take its place.
+* `SubscriptionOpeningStrategy.TAKE_OVER`
+ The server will allow an incoming connection to take over an existing one,
+ based on the connection strategy in use by the currently connected worker:
+ * If the existing connection **does not** have a `TAKE_OVER` strategy:
+ The incoming connection will take over, causing the existing connection to throw a `SubscriptionInUseException`.
+ * If the existing connection **has** a `TAKE_OVER` strategy:
+ The incoming connection will throw a `SubscriptionInUseException` exception.
+
+### Multiple workers per subscription strategy
+
+* `SubscriptionOpeningStrategy.CONCURRENT`
+ The server allows multiple workers to connect to the same subscription **concurrently**.
+ Read more about concurrent subscriptions [here](../../../client-api/data-subscriptions/concurrent-subscriptions.mdx).
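+
+For example, a worker that connects only when no other worker is currently connected - a minimal sketch:
+
+
+{`worker = store.subscriptions.get_subscription_worker(
+    SubscriptionWorkerOptions(subscription_name, strategy=SubscriptionOpeningStrategy.OPEN_IF_FREE), Order
+)
+`}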
+
+
+
+## Determining which workers a subscription will serve
+
+
+
+The **strategy used by the first worker connecting to a subscription** determines
+which additional workers the subscription can serve until all worker connections are dropped.
+
+
+
+* A subscription that serves one or more [CONCURRENT](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#multiple-workers-per-subscription-strategy) workers,
+ **can only serve other concurrent workers** until all connections are dropped.
+ If a worker with a [one worker per subscription](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#one-worker-per-subscription-strategies)
+ strategy attempts to connect -
+ * The connection attempt will be rejected.
+ * `SubscriptionInUseException` will be thrown.
+
+* A subscription that serves a worker with a [one worker per subscription](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#one-worker-per-subscription-strategies) strategy,
+ **cannot** serve [CONCURRENT](../../../client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx#multiple-workers-per-subscription-strategy)
+ workers until that worker's connection is dropped.
+ If a concurrent worker attempts to connect -
+ * The connection attempt will be rejected.
+ * `SubscriptionInUseException` will be thrown.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/api-overview.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/api-overview.mdx
new file mode 100644
index 0000000000..b424f128a4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/api-overview.mdx
@@ -0,0 +1,44 @@
+---
+title: "Consume Subscriptions API"
+hide_table_of_contents: true
+sidebar_label: API Overview
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ApiOverviewCsharp from './_api-overview-csharp.mdx';
+import ApiOverviewJava from './_api-overview-java.mdx';
+import ApiOverviewPython from './_api-overview-python.mdx';
+import ApiOverviewNodejs from './_api-overview-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/examples.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/examples.mdx
new file mode 100644
index 0000000000..9db13b374b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/examples.mdx
@@ -0,0 +1,44 @@
+---
+title: "Subscription Consumption Examples"
+hide_table_of_contents: true
+sidebar_label: Examples
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ExamplesCsharp from './_examples-csharp.mdx';
+import ExamplesJava from './_examples-java.mdx';
+import ExamplesPython from './_examples-python.mdx';
+import ExamplesNodejs from './_examples-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx
new file mode 100644
index 0000000000..04ac04347d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/consumption/how-to-consume-data-subscription.mdx
@@ -0,0 +1,48 @@
+---
+title: "How to Consume a Data Subscription"
+hide_table_of_contents: true
+sidebar_label: How to Consume a Data Subscription
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToConsumeDataSubscriptionCsharp from './_how-to-consume-data-subscription-csharp.mdx';
+import HowToConsumeDataSubscriptionJava from './_how-to-consume-data-subscription-java.mdx';
+import HowToConsumeDataSubscriptionPython from './_how-to-consume-data-subscription-python.mdx';
+import HowToConsumeDataSubscriptionNodejs from './_how-to-consume-data-subscription-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-csharp.mdx
new file mode 100644
index 0000000000..d7da1cefc3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-csharp.mdx
@@ -0,0 +1,277 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Create subscription](../../../client-api/data-subscriptions/creation/api-overview.mdx#create-subscription)
+ * [Subscription creation options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options)
+ * [Update subscription](../../../client-api/data-subscriptions/creation/api-overview.mdx#update-subscription)
+ * [Subscription update options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-update-options)
+ * [Subscription query](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-query)
+
+
+## Create subscription
+
+Subscriptions can be created using the following `Create` methods available through the `Subscriptions` property of the `DocumentStore`.
+
+
+
+{`string Create(SubscriptionCreationOptions options,
+    string database = null);
+
+string Create<T>(SubscriptionCreationOptions<T> options = null,
+    string database = null);
+
+string Create<T>(SubscriptionCreationOptions options,
+    string database = null);
+
+string Create<T>(Expression<Func<T, bool>> predicate = null,
+    PredicateSubscriptionCreationOptions options = null,
+    string database = null);
+
+Task<string> CreateAsync(SubscriptionCreationOptions options,
+    string database = null,
+    CancellationToken token = default);
+
+Task<string> CreateAsync<T>(SubscriptionCreationOptions<T> options = null,
+    string database = null,
+    CancellationToken token = default);
+
+Task<string> CreateAsync<T>(SubscriptionCreationOptions options,
+    string database = null,
+    CancellationToken token = default);
+
+Task<string> CreateAsync<T>(Expression<Func<T, bool>> predicate = null,
+    PredicateSubscriptionCreationOptions options = null,
+    string database = null,
+    CancellationToken token = default);
+`}
+
+
+
+| Parameter | Type | Description |
+|----------------|-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **predicate** | `Expression<Func<T, bool>>` | An optional lambda expression that returns a boolean. This predicate defines the filter criteria for the subscription documents. |
+| **options** | `SubscriptionCreationOptions<T>` | Contains subscription creation options (generic version). See [Subscription creation options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options) |
+| **options** | `SubscriptionCreationOptions` | Contains subscription creation options (non-generic version). See [Subscription creation options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options) |
+| **options** | `PredicateSubscriptionCreationOptions` | Contains subscription creation options (when passing a predicate). See [Subscription creation options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options) |
+| **database** | `string` | The name of the database where the subscription task will be created. If `null`, the default database configured in the DocumentStore will be used. |
+| **token** | `CancellationToken` | Cancellation token used to halt the subscription creation process. |
+
+| Return value | Description |
+|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `string` | The name of the created data subscription. If the name was provided in `SubscriptionCreationOptions`, it will be returned. Otherwise, a unique name will be generated by the server. |
+
+
+
+## Subscription creation options
+
+
+
+Options for the **generic** version of the subscription creation options object:
+
+
+{`public class SubscriptionCreationOptions<T>
+\{
+    public string Name \{ get; set; \}
+    public Expression<Func<T, bool>> Filter \{ get; set; \}
+    public Expression<Func<T, object>> Projection \{ get; set; \}
+    public Action<ISubscriptionIncludeBuilder<T>> Includes \{ get; set; \}
+    public string ChangeVector \{ get; set; \}
+    public bool Disabled \{ get; set; \}
+    public string MentorNode \{ get; set; \}
+    public bool PinToMentorNode \{ get; set; \}
+    public ArchivedDataProcessingBehavior? ArchivedDataProcessingBehavior \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
+Options for the **non-generic** version of the subscription creation options object:
+
+
+{`public class SubscriptionCreationOptions
+\{
+ public string Name \{ get; set; \}
+ public string Query \{ get; set; \}
+ public string ChangeVector \{ get; set; \}
+ public virtual bool Disabled \{ get; set; \}
+ public string MentorNode \{ get; set; \}
+ public virtual bool PinToMentorNode \{ get; set; \}
+ public ArchivedDataProcessingBehavior? ArchivedDataProcessingBehavior \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
+Options for the **non-generic** version of the subscription creation options object when passing a **predicate**:
+
+
+{`public sealed class PredicateSubscriptionCreationOptions
+\{
+ public string Name \{ get; set; \}
+ public string ChangeVector \{ get; set; \}
+ public bool Disabled \{ get; set; \}
+ public string MentorNode \{ get; set; \}
+ public bool PinToMentorNode \{ get; set; \}
+ public ArchivedDataProcessingBehavior? ArchivedDataProcessingBehavior \{ get; set; \}
+\}
+`}
+
+
+
+
+
+| Member | Type | Description |
+|------------------------------------|------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **<T>** | `T` | Type of object from which the collection of documents managed by the subscription will be derived. |
+| **Name** | `string` | User defined name for the subscription. The name must be unique in the database. |
+| **Query** | `string` | RQL query that defines the subscription. This RQL comes with additional support to JavaScript functions inside the `where` clause and special semantics for subscriptions on documents revisions. |
+| **Filter** | `Expression<Func<T, bool>>` | Lambda expression defining the filter logic for the subscription. Will be translated to a JavaScript function. |
+| **Projection** | `Expression<Func<T, object>>` | Lambda expression defining the projection that will be sent by the subscription for each matching document. Will be translated to a JavaScript function. |
+| **Includes** | `Action<ISubscriptionIncludeBuilder<T>>` | An action that defines include clauses for the subscription. [Included documents](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-documents) and/or [included counters](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters) will be part of the batch sent by the subscription. Include methods can be chained. |
+| **ChangeVector** | `string` | Allows you to define a change vector from which the subscription will start processing. Learn more [below](../../../client-api/data-subscriptions/creation/api-overview.mdx#the-changevector-property). |
+| **Disabled** | `bool` | `true` - task will be disabled. `false` - task will be enabled. |
+| **MentorNode** | `string` | Allows you to define a node in the cluster that will be responsible for handling the subscription. Useful when you prefer a specific server due to its stronger hardware, closer geographic proximity to clients, or other reasons. |
+| **PinToMentorNode** | `bool` | `true` - the selected responsible node will be pinned to handle the task. `false` - Another node will execute the task if the responsible node is down. |
+| **ArchivedDataProcessingBehavior** | `ArchivedDataProcessingBehavior?` | Define whether [archived documents](../../../data-archival/archived-documents-and-other-features.mdx#archived-documents-and-subscriptions) will be included in the subscription. |
+
+
+
+###### The `ChangeVector` property:
+
+* The _ChangeVector_ property allows you to define a starting point from which the subscription will begin processing changes.
+* This is useful for ad-hoc processes that need to process only recent changes. In such cases, you can:
+ * Set the field to _"LastDocument"_ to start processing from the latest document in the collection.
+ * Or, provide an actual Change Vector to begin processing from a specific point.
+* By default, the subscription will send all documents matching the RQL query, regardless of their creation time.
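+
+For example, a subscription that starts processing from the latest document in the collection - a minimal sketch:
+
+
+{`var name = store.Subscriptions.Create(new SubscriptionCreationOptions<Order>
+\{
+    Name = "RecentOrdersOnly",
+    // Start processing from the latest document in the collection
+    ChangeVector = "LastDocument"
+\});
+`}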
+
+
+
+
+
+## Update subscription
+
+Existing subscriptions can be modified using the following `Update` methods available through the `Subscriptions` property of the `DocumentStore`.
+
+
+
+{`string Update(SubscriptionUpdateOptions options, string database = null);
+
+Task<string> UpdateAsync(SubscriptionUpdateOptions options, string database = null,
+ CancellationToken token = default);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **options** | `SubscriptionUpdateOptions` | The subscription update options object. See [SubscriptionUpdateOptions](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptionupdateoptions) |
+| **database** | `string` | The name of the database where the subscription task resides. If `null`, the default database configured in the DocumentStore will be used. |
+| **token** | `CancellationToken` | Cancellation token used to halt the update process. |
+
+| Return value | Description |
+|---------------|--------------------------------------------|
+| `string` | The name of the updated data subscription. |
+
+
+
+## Subscription update options
+
+`SubscriptionUpdateOptions` inherits from [SubscriptionCreationOptions](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptioncreationoptions)
+and adds two additional fields:
+
+
+
+{`public class SubscriptionUpdateOptions : SubscriptionCreationOptions
+\{
+ public long? Id \{ get; set; \}
+ public bool CreateNew \{ get; set; \}
+\}
+`}
+
+
+
+| Parameter | Type | Description |
+|---------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Id** | `long?` | The unique ID that was assigned to the subscription by the server at creation time. You can retrieve it by [getting the subscription status](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#getting-subscription-status). When updating, the `Id` can be used instead of the `Name` field, and takes precedence over it. This allows you to modify the subscription's name: provide the Id and submit a new name in the Name field. |
+| **CreateNew** | `bool` | Determines the behavior when the subscription you wish to update does Not exist. `true` - a new subscription is created with the provided option parameters. `false` - an exception will be thrown. Default: `false` |
+
+
+
+## Subscription query
+
+All subscriptions are eventually translated to an RQL-like statement. These statements have the following parts:
+
+* A functions definition part, as in ordinary RQL. These functions can contain any JavaScript code
+  and also support `load` and `include` operations.
+
+* A from statement, defining the documents source, e.g. `from Orders`. The from statement can only address collections; indexes are not supported.
+
+* A where statement describing the criteria that determine whether a document is sent to the worker.
+  These statements support RQL-like equality operations (`=`, `==`), plain JavaScript expressions,
+  and declared function calls, allowing complex filtering logic.
+  Subscription RQL does not support any of the regular RQL search keywords.
+
+* A select statement that defines the projection to be performed.
+  Select statements can contain function calls, allowing complex transformations.
+
+* An include statement that defines include paths in the document.
+
+
+Although the subscription query syntax has an RQL-like structure, it supports only the `declare`, `select`, and `where` keywords; all other RQL keywords are not supported.
+JavaScript ES5 syntax is supported.
+
+
+
+Paths in subscription RQL statements are treated as JavaScript indirections, not as regular RQL paths.
+This means that a query that looks like this in RQL:
+
+```
+from Orders as o
+where o.Lines[].Product = "products/1-A"
+```
+
+will look like this in subscription RQL:
+
+```
+declare function filterLines(doc, productId)
+{
+    if (!!doc.Lines) {
+        return doc.Lines.filter(x => x.Product == productId).length > 0;
+ }
+ return false;
+}
+
+from Orders as o
+where filterLines(o, "products/1-A")
+```
+
+
+
+
+To define a data subscription that sends document revisions to the client,
+you must first [configure revisions](../../../document-extensions/revisions/overview.mdx#defining-a-revisions-configuration)
+for the specific collection managed by the subscription.
+
+The subscription should be defined in a special way:
+
+* When using the generic API, the `SubscriptionCreationOptions<>` generic parameter should be of the generic type `Revision<>`,
+  whose generic parameter corresponds to the collection to be processed, e.g. `new SubscriptionCreationOptions<Revision<Order>>()`
+* For RQL syntax, concatenate the `(Revisions = true)` clause to the collection being queried.
+ For example: `From Orders(Revisions = true) as o`
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-nodejs.mdx
new file mode 100644
index 0000000000..006d6e9463
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-nodejs.mdx
@@ -0,0 +1,285 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Create subscription](../../../client-api/data-subscriptions/creation/api-overview.mdx#create-subscription)
+ * [Subscription creation options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options)
+ * [Include methods](../../../client-api/data-subscriptions/creation/api-overview.mdx#include-methods)
+ * [Update subscription](../../../client-api/data-subscriptions/creation/api-overview.mdx#update-subscription)
+ * [Subscription update options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-update-options)
+ * [Subscribe to revisions](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscribe-to-revisions)
+ * [Subscription RQL](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-rql)
+
+
+## Create subscription
+
+Subscriptions can be created using the following `create` methods available through the `subscriptions` property of the `DocumentStore`.
+
+
+
+{`// Available overloads:
+// ====================
+
+create(options);
+
+create(options, database);
+
+create(documentType);
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **options** | `object` | The [subscription creation options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options). |
+| **database** | `string` | The name of the database where the subscription task will be created. If `null`, the default database configured in the DocumentStore will be used. |
+| **documentType** | `object` | The class type from which the collection of documents managed by the subscription will be derived. |
+
+| Return value | Description |
+|---------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `Promise<string>` | A Promise that resolves to the **name** of the created data subscription. If the name was provided in the [subscription creation options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options), it will be returned. Otherwise, a unique name will be generated by the server. |
+
+Examples for creating subscriptions are available [here](../../../client-api/data-subscriptions/creation/examples.mdx).
+
+
+
+## Subscription creation options
+
+
+
+{`// The SubscriptionCreationOptions object:
+// =======================================
+\{
+ name;
+ query;
+ includes;
+ changeVector;
+ mentorNode;
+ pinToMentorNode;
+ disabled;
+ documentType;
+\}
+`}
+
+
+
+| Member | Type | Description |
+|---------------------|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **name**            | `string`            | User-defined name for the subscription. The name must be unique in the database. |
+| **query**           | `string`            | RQL query that defines the subscription. This RQL comes with additional support for JavaScript functions inside the `where` clause and special semantics for subscriptions on document revisions. Learn more in [subscription RQL](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-rql). |
+| **includes**        | `(builder) => void` | A function that accepts a builder object, which allows you to include related documents, counters, and time series in the batch that is sent to the client. See [Include methods](../../../client-api/data-subscriptions/creation/api-overview.mdx#include-methods). |
+| **changeVector**    | `string`            | Allows you to define a change vector from which the subscription will start processing. Useful for ad-hoc processes that need to process only recent changes. In such cases, you can set the field to _"LastDocument"_ to start processing from the latest document in the collection. |
+| **mentorNode**      | `string`            | Allows you to define a specific node in the cluster to handle the subscription. Useful when you prefer a specific server due to its stronger hardware, closer geographic proximity to clients, or other reasons. |
+| **pinToMentorNode** | `boolean` | `true` - task will only be handled by the specified mentor node. `false` - When the specified mentor node is down, the cluster selects another node from the Database Group to handle the task. Learn more in [pinning a task](../../../server/clustering/distribution/highly-available-tasks.mdx#pinning-a-task). |
+| **disabled** | `boolean` | `true` - the created subscription will be in a disabled state. `false` (default) - the created subscription will be enabled. |
+| **documentType** | `object` | The class type from which the collection of documents managed by the subscription will be derived. |
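+
+As a quick illustration, a hedged sketch that uses some of these options
+(the subscription name is arbitrary):
+
+```js
+const subscriptionName = await documentStore.subscriptions.create({
+    name: "RecentOrdersSubscription",
+    query: "from Orders",
+    // Start processing from the latest document in the collection
+    // instead of receiving all existing documents
+    changeVector: "LastDocument"
+});
+```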
+
+
+
+
+## Include methods
+
+**Including documents**:
+
+
+
+{`includeDocuments(path);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------|------------|----------------------------------------------------------------|
+| **path**  | `string`   | Path to the property that contains the ID of the document to include. |
+
+An example of including documents is available [here](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-documents).
+
+**Including counters**:
+
+
+
+{`// Include a single counter
+includeCounter(name);
+
+// Include multiple counters
+includeCounters(names);
+
+// Include ALL counters from ALL documents that match the subscription criteria
+includeAllCounters();
+`}
+
+
+
+| Parameter | Type | Description |
+|------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
+| **name** | `string` | The name of a counter. The subscription will include all counters with this name that are contained in the documents the subscription retrieves. |
+| **names** | `string[]` | Array of counter names. |
+
+An example of including counters is available [here](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters).
+
+**Including time series**:
+
+
+
+{`includeTimeSeries(name, type, time);
+includeTimeSeries(name, type, count);
+
+includeTimeSeries(names, type, time);
+includeTimeSeries(names, type, count);
+
+includeAllTimeSeries(type, time);
+includeAllTimeSeries(type, count);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------|-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **name** | `string` | The name of the time series to include. |
+| **names** | `string[]` | The names of the time series to include. |
+| **type** | `string` | Indicates how to retrieve the time series entries. Range type can be: `"None"` or `"Last"`. When set to _Last_, retrieve the last X entries, where X is determined by _count_. |
+| **time** | `TimeValue` | The time range to consider when retrieving time series entries. E.g.: `TimeValue.ofDays(7)` |
+| **count** | `number` | The maximum number of entries to take when retrieving time series entries. |
+
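+For example, a hedged sketch that includes the last week of entries from a time series
+(the `HeartRates` time series name is illustrative; `TimeValue` is imported from `ravendb`):
+
+```js
+const { TimeValue } = require("ravendb");
+
+const options = {
+    query: "from Employees",
+    includes: builder => builder
+        // Include entries from the last 7 days of the 'HeartRates' time series
+        .includeTimeSeries("HeartRates", "Last", TimeValue.ofDays(7))
+};
+
+const subscriptionName = await documentStore.subscriptions.create(options);
+```
+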
+
+
+## Update subscription
+
+Existing subscriptions can be modified using the following `update` methods available through the `subscriptions` property of the `DocumentStore`.
+
+
+
+{`// Available overloads:
+// ====================
+
+update(options);
+
+update(options, database);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
+| **options** | `object` | The [subscription update options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-update-options). |
+| **database** | `string` | The name of the database where the subscription task resides. If `null`, the default database configured in the DocumentStore will be used. |
+
+| Return value | Description |
+|---------------|----------------------------------------------------------------------------------------|
+| `Promise<string>` | A Promise that resolves to the **name** of the updated data subscription. |
+
+Examples for updating an existing subscription are available [here](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription).
+
+
+
+## Subscription update options
+
+The subscription update options object extends the [creation options object](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options)
+and adds two additional fields:
+
+
+
+{`// The SubscriptionUpdateOptions object:
+// =====================================
+\{
+ id;
+ createNew;
+\}
+`}
+
+
+
+| Parameter | Type | Description |
+|---------------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `number` | The unique ID that was assigned to the subscription by the server at creation time. You can retrieve it by [getting the subscription status](../../../client-api/data-subscriptions/advanced-topics/maintenance-operations.mdx#getting-subscription-status). When updating, the `id` can be used instead of the `name` field, and takes precedence over it. This allows you to modify the subscription's name: provide the id and submit a new name in the name field. |
+| **createNew** | `boolean` | Determines the behavior when the subscription you wish to update does Not exist. `true` - a new subscription is created with the provided option parameters. `false` - an exception will be thrown. Default: `false` |
+
+
+
+
+## Subscribe to revisions
+
+To define a data subscription that sends document revisions to the client,
+you must first [configure revisions](../../../document-extensions/revisions/overview.mdx#defining-a-revisions-configuration)
+for the specific collection managed by the subscription.
+
+Create a subscription that sends document revisions using the following `createForRevisions` methods:
+
+
+
+{`// Available overloads:
+// ====================
+
+createForRevisions(options);
+
+createForRevisions(options, database);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **options** | `object` | The [subscription creation options](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation-options). |
+| **database** | `string` | The name of the database where the subscription task will be created. If `null`, the default database configured in the DocumentStore will be used. |
+
+When providing raw RQL to the `query` param in the options object,
+concatenate the `(Revisions = true)` clause to the collection being queried.
+For example: `From Orders(Revisions = true) as o`
+
+Learn more about subscribing to revisions in [revisions support](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx).
+
+
+
+## Subscription RQL
+
+All subscriptions are eventually translated to an RQL-like statement. These statements have the following parts:
+
+* A functions definition part, as in ordinary RQL. These functions can contain any JavaScript code
+  and also support `load` and `include` operations.
+
+* A from statement, defining the documents source, e.g. `from Orders`. The from statement can only address collections; indexes are not supported.
+
+* A where statement describing the criteria that determine whether a document is sent to the worker.
+  These statements support RQL-like equality operations (`=`, `==`), plain JavaScript expressions,
+  and declared function calls, allowing complex filtering logic.
+  Subscription RQL does not support any of the regular RQL search keywords.
+
+* A select statement that defines the projection to be performed.
+  Select statements can contain function calls, allowing complex transformations.
+
+* An include statement that defines include paths in the document.
+
+
+Although the subscription query syntax has an RQL-like structure, it supports only the `declare`, `select`, and `where` keywords; all other RQL keywords are not supported.
+JavaScript ES5 syntax is supported.
+
+
+
+Paths in subscription RQL statements are treated as JavaScript indirections, not as regular RQL paths.
+This means that a query that looks like this in RQL:
+
+```
+from Orders as o
+where o.Lines[].Product = "products/1-A"
+```
+
+will look like this in subscription RQL:
+
+```
+declare function filterLines(doc, productId)
+{
+    if (!!doc.Lines) {
+        return doc.Lines.filter(x => x.Product == productId).length > 0;
+ }
+ return false;
+}
+
+from Orders as o
+where filterLines(o, "products/1-A")
+```
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-python.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-python.mdx
new file mode 100644
index 0000000000..4e619bb86c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_api-overview-python.mdx
@@ -0,0 +1,179 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In this page:
+ * [Create subscription](../../../client-api/data-subscriptions/creation/api-overview.mdx#create-subscription)
+ * [SubscriptionCreationOptions](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptioncreationoptions)
+ * [Update subscription](../../../client-api/data-subscriptions/creation/api-overview.mdx#update-subscription)
+ * [SubscriptionUpdateOptions](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptionupdateoptions)
+ * [Subscription query](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-query)
+
+
+## Create subscription
+
+Subscriptions can be created using the `create_for_options` and `create_for_class` methods.
+
+
+{`def create_for_options(self, options: SubscriptionCreationOptions, database: Optional[str] = None) -> str: ...
+
+def create_for_class(
+ self,
+ object_type: Type[_T],
+ options: Optional[SubscriptionCreationOptions] = None,
+ database: Optional[str] = None,
+) -> str: ...
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **options** | `SubscriptionCreationOptions` | Contains subscription creation options |
+| **database** (Optional) | `str` | The name of the database where the subscription task will be created. If `None`, the default database configured in the DocumentStore will be used. |
+| **object_type** | `Type[_T]` | The class type from which the collection of documents managed by the subscription will be derived |
+
+| Return value | Description |
+| ------------- | ----- |
+| `str` | Created data subscription name. If the name was provided in `SubscriptionCreationOptions`, it will be returned. Otherwise, a unique name will be generated by the server. |
+
+
+
+## SubscriptionCreationOptions
+
+An RQL statement will be built based on the fields of this object.
+
+
+{`class SubscriptionCreationOptions:
+ def __init__(
+ self,
+ name: Optional[str] = None,
+ query: Optional[str] = None,
+ includes: Optional[Callable[[SubscriptionIncludeBuilder], None]] = None,
+ change_vector: Optional[str] = None,
+ mentor_node: Optional[str] = None,
+ ):
+ self.name = name
+ self.query = query
+ self.includes = includes
+ self.change_vector = change_vector
+ self.mentor_node = mentor_node
+`}
+
+
+
+| Member | Type | Description |
+|------------------------------|:-----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **name** (Optional) | `str` | User-defined name of the subscription, providing a human-readable identification of the subscription. The name must be unique in the database. |
+| **query** (Optional) | `str` | RQL query that describes the subscription. This RQL comes with additional support for JavaScript functions inside the `where` clause and special semantics for subscriptions on document revisions. |
+| **change_vector** (Optional) | `str` | Allows you to define a change vector from which the subscription will start processing. Useful for ad-hoc processes that need to process only recent changes. In such cases, you can set the field to _"LastDocument"_ to start processing from the latest document in the collection. |
+| **mentor_node** (Optional) | `str` | Allows you to define a specific node in the cluster to handle the subscription. Useful when you prefer a specific server due to its stronger hardware, closer geographic proximity to clients, or other reasons. |
+| **includes** (Optional) | `Callable[[SubscriptionIncludeBuilder], None]` | A callable with a [SubscriptionIncludeBuilder](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-documents) parameter that allows you to define an include clause for the subscription. Methods can be chained to include documents as well as [counters](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters). |
+
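+For example, a hedged sketch of an include clause
+(assuming the builder's `include_documents` and `include_counter` methods, which mirror the other clients and can be chained):
+
+```python
+name = store.subscriptions.create_for_options(
+    SubscriptionCreationOptions(
+        query="from Orders",
+        # Include the related Employee document and the 'Likes' counter
+        # in each batch sent to the client
+        includes=lambda builder: builder.include_documents("Employee").include_counter("Likes"),
+    )
+)
+```
+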
+
+
+## Update subscription
+
+Modifies an existing data subscription. The `update` method is accessible through the `subscriptions` property of the `DocumentStore`.
+
+
+
+{`def update(self, options: SubscriptionUpdateOptions, database: Optional[str] = None) -> str: ...
+`}
+
+
+
+| Parameter | Type | Description |
+| - | - | - |
+| **options** | `SubscriptionUpdateOptions` | A subscription update options object |
+| **database** (Optional) | `str` | The name of the database where the subscription task resides. If `None`, the default database configured in the DocumentStore will be used. |
+
+| Return value | Description |
+| ------------- | ----- |
+| `str` | The updated data subscription's name. |
+
+
+
+## SubscriptionUpdateOptions
+
+Inherits from `SubscriptionCreationOptions` and has all the same fields (see [above](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptioncreationoptions)) plus the two additional fields described below:
+
+
+
+{`class SubscriptionUpdateOptions(SubscriptionCreationOptions):
+ def __init__(
+ self,
+ name: Optional[str] = None,
+ query: Optional[str] = None,
+ includes: Optional[Callable[[SubscriptionIncludeBuilder], None]] = None,
+ change_vector: Optional[str] = None,
+ mentor_node: Optional[str] = None,
+ key: Optional[int] = None,
+ create_new: Optional[bool] = None,
+ ): ...
+`}
+
+
+
+| Parameter | Type | Description |
+| - | - | - |
+| **key** (Optional) | `int` | Unique server-side ID of the data subscription. `key` can be used instead of the subscription update options `name` field, and takes precedence over it. This allows you to change the subscription's name: submit a subscription's ID, and submit a different name in the `name` field. |
+| **create_new** (Optional) | `bool` | Determines the behavior when the subscription you wish to update does Not exist. `true` - a new subscription is created with the provided option parameters. `false` - an exception will be thrown. Default: `false` |
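+
+For example, a minimal sketch that updates a subscription's query,
+creating the subscription if it does not exist yet:
+
+```python
+name = store.subscriptions.update(
+    SubscriptionUpdateOptions(
+        name="my subscription",
+        query="from Products where PricePerUnit > 50",
+        # Create a new subscription if "my subscription" does Not exist
+        create_new=True,
+    )
+)
+```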
+
+
+
+## Subscription query
+
+All subscriptions are eventually translated to an RQL-like statement. These statements have the following parts:
+
+* A functions definition part, as in ordinary RQL. These functions can contain any JavaScript code
+  and also support `load` and `include` operations.
+
+* A from statement, defining the documents source, e.g. `from Orders`. The from statement can only address collections; indexes are not supported.
+
+* A where statement describing the criteria that determine whether a document is sent to the worker.
+  These statements support RQL-like equality operations (`=`, `==`), plain JavaScript expressions,
+  and declared function calls, allowing complex filtering logic.
+  Subscription RQL does not support any of the regular RQL search keywords.
+
+* A select statement that defines the projection to be performed.
+  Select statements can contain function calls, allowing complex transformations.
+
+* An include statement that defines include paths in the document.
+
+
+Although the subscription query syntax has an RQL-like structure, it supports only the `declare`, `select`, and `where` keywords; all other RQL keywords are not supported.
+JavaScript ES5 syntax is supported.
+
+
+
+Paths in subscription RQL statements are treated as JavaScript indirections, not as regular RQL paths.
+This means that a query that looks like this in RQL:
+
+```
+from Orders as o
+where o.Lines[].Product = "products/1-A"
+```
+
+will look like this in subscription RQL:
+
+```
+declare function filterLines(doc, productId)
+{
+    if (!!doc.Lines) {
+        return doc.Lines.filter(x => x.Product == productId).length > 0;
+ }
+ return false;
+}
+
+from Orders as o
+where filterLines(o, "products/1-A")
+```
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_category_.json b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_category_.json
new file mode 100644
index 0000000000..696f998ee4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 1,
+ "label": Creation,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-csharp.mdx
new file mode 100644
index 0000000000..52c7682c15
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-csharp.mdx
@@ -0,0 +1,479 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This page contains examples of **creating a subscription**.
+ To learn how to consume and process documents sent by the subscription, see these [examples](../../../client-api/data-subscriptions/consumption/examples.mdx).
+
+* For a detailed syntax of the available subscription methods and objects, see this [API overview](../../../client-api/data-subscriptions/creation/api-overview.mdx).
+
+* In this page:
+ * [Create subscription - for all documents in a collection](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---for-all-documents-in-a-collection)
+ * [Create subscription - filter documents](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-documents)
+ * [Create subscription - filter and project fields](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-and-project-fields)
+ * [Create subscription - project data from a related document](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---project-data-from-a-related-document)
+ * [Create subscription - include documents](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-documents)
+ * [Create subscription - include counters](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters)
+ * [Create subscription - subscribe to revisions](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---subscribe-to-revisions)
+ * [Create subscription - via update](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---via-update)
+ * [Update existing subscription](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription)
+
+
+## Create subscription - for all documents in a collection
+
+Here we create a plain subscription on the _Orders_ collection without any constraints or transformations.
+The server will send ALL documents from the _Orders_ collection to a client that connects to this subscription.
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions
+{
+ // Set a custom name for the subscription
+ Name = "OrdersProcessingSubscription"
+});
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Query = "From Orders",
+ Name = "OrdersProcessingSubscription"
+});
+`}
+
+
+
+
+
+
+## Create subscription - filter documents
+
+Here we create a subscription for documents from the _Orders_ collection where the total order revenue is greater than 100.
+Only documents that match this condition will be sent from the server to a client connected to this subscription.
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(x =>
+ // Only documents matching this criteria will be sent
+ x.Lines.Sum(line => line.PricePerUnit * line.Quantity) > 100);
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Query = @"declare function getOrderLinesSum(doc) {
+ var sum = 0;
+ for (var i in doc.Lines) {
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ }
+ return sum;
+ }
+
+ From Orders as o
+ Where getOrderLinesSum(o) > 100"
+});
+`}
+
+
+
+
+
+
+## Create subscription - filter and project fields
+
+Here, again, we create a subscription for documents from the _Orders_ collection where the total order revenue is greater than 100.
+However, this time we only project the document ID and the Total Revenue properties in each object sent to the client.
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ // The subscription criteria:
+ Filter = x => x.Lines.Sum(line => line.PricePerUnit * line.Quantity) > 100,
+
+ // The object properties that will be sent for each matching document:
+ Projection = x => new
+ {
+ Id = x.Id,
+ Total = x.Lines.Sum(line => line.PricePerUnit * line.Quantity)
+ }
+});
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Query = @"declare function getOrderLinesSum(doc) {
+ var sum = 0;
+ for (var i in doc.Lines) {
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ }
+ return sum;
+ }
+
+ declare function projectOrder(doc) {
+ return {
+ Id: doc.Id,
+ Total: getOrderLinesSum(doc)
+ };
+ }
+
+ From Orders as o
+ Where getOrderLinesSum(o) > 100
+ Select projectOrder(o)"
+});
+`}
+
+
+
+
+
+
+## Create subscription - project data from a related document
+
+In this subscription, in addition to projecting the document fields,
+we also project data from a [related document](../../../indexes/indexing-related-documents.mdx#what-are-related-documents) that is loaded using the `Load` method.
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(
+ new SubscriptionCreationOptions()
+ {
+ // The subscription criteria:
+ Filter = x => x.Lines.Sum(line => line.PricePerUnit * line.Quantity) > 100,
+
+ // The object properties that will be sent for each matching document:
+ Projection = x => new
+ {
+ Id = x.Id,
+ Total = x.Lines.Sum(line => line.PricePerUnit * line.Quantity),
+ ShipTo = x.ShipTo,
+
+ // 'Load' the related Employee document and use its data in the projection
+ EmployeeName = RavenQuery.Load(x.Employee).FirstName + " " +
+ RavenQuery.Load(x.Employee).LastName
+ }
+ });
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Query = @"declare function getOrderLinesSum(doc) {
+ var sum = 0;
+ for (var i in doc.Lines) {
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ }
+ return sum;
+ }
+
+ declare function projectOrder(doc) {
+ var employee = load(doc.Employee);
+ return {
+ Id: doc.Id,
+ Total: getOrderLinesSum(doc),
+ ShipTo: doc.ShipTo,
+ EmployeeName: employee.FirstName + ' ' + employee.LastName
+ };
+ }
+
+ From Orders as o
+ Where getOrderLinesSum(o) > 100
+ Select projectOrder(o)"
+});
+`}
+
+
+
+
+
+
+## Create subscription - include documents
+
+Here we create a subscription on the _Orders_ collection, which will send all the _Order_ documents.
+
+In addition, the related _Product_ documents associated with each Order are **included** in the batch sent to the client.
+This way, when the subscription worker that processes the batch in the client accesses a _Product_ document, no additional call to the server will be made.
+
+See how to consume this type of subscription [here](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-uses-included-documents).
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Includes = builder => builder
+ // The documents whose IDs are specified in the 'Product' property
+ // will be included in the batch
+ .IncludeDocuments(x => x.Lines.Select(y => y.Product))
+});
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Query = @"from Orders include Lines[].Product"
+});
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Query = @"declare function includeProducts(doc) {
+ let includedFields = 0;
+ let linesCount = doc.Lines.length;
+
+ for (let i = 0; i < linesCount; i++) {
+ includedFields++;
+ include(doc.Lines[i].Product);
+ }
+
+ return doc;
+ }
+
+ from Orders as o select includeProducts(o)"
+});
+`}
+
+
+
+
+
+
+**Include using builder**:
+
+Include statements can be added to the subscription with `ISubscriptionIncludeBuilder`.
+This builder is assigned to the `Includes` property in [SubscriptionCreationOptions<T>](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptioncreationoptionst).
+It supports methods for including documents as well as [counters](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters).
+These methods can be chained.
+
+To include related documents, use method `IncludeDocuments`.
+(See the _Builder-syntax_ tab in the example above).
+
+
+
+
+**Include using RQL**:
+
+The include statements can be written in two ways:
+
+1. Use the `include` keyword at the end of the query, followed by the paths to the fields containing the IDs of the documents to include.
+ It is recommended to prefer this approach whenever possible, both for the clarity of the query and for slightly better performance.
+ (See the _RQL-path-syntax_ tab in the example above).
+
+2. Define the `include` within a JavaScript function that is called from the `select` clause.
+ (See the _RQL-javascript-syntax_ tab in the example above).
+
+
+
+
+
+If you include documents when making a [projection](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-and-project-fields),
+the include will search for the specified paths in the projected fields rather than in the original document.
+
+
+
+
+## Create subscription - include counters
+
+Here we create a subscription on the _Orders_ collection, which will send all the _Order_ documents.
+In addition, values for the specified counters will be **included** in the batch.
+
+Note:
+Modifying an existing counter's value after the document has been sent to the client does Not trigger re-sending.
+However, adding a new counter to the document or removing an existing one will trigger re-sending the document.
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Includes = builder => builder
+ // Values for the specified counters will be included in the batch
+ .IncludeCounters(new[] { "Pros", "Cons" })
+});
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ Query = @"from Orders include counters('Pros'), counters('Cons')"
+});
+`}
+
+
+
+
+`ISubscriptionIncludeBuilder` has three methods for including counters:
+
+
+
+{`// Include a single counter
+ISubscriptionIncludeBuilder IncludeCounter(string name);
+
+// Include multiple counters
+ISubscriptionIncludeBuilder IncludeCounters(string[] names);
+
+// Include ALL counters from ALL documents that match the subscription criteria
+ISubscriptionIncludeBuilder IncludeAllCounters();
+`}
+
+
+
+| Parameter | Type | Description |
+|------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
+| **name** | `string` | The name of a counter. The subscription will include all counters with this name that are contained in the documents the subscription retrieves. |
+| **names** | `string[]` | Array of counter names. |
+
+**All include methods can be chained**:
+For example, the following subscription includes multiple counters and documents:
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+\{
+ Includes = builder => builder
+ .IncludeCounter("Likes")
+ .IncludeCounters(new[] \{ "Pros", "Cons" \})
+ .IncludeDocuments("Employee")
+\});
+`}
+
+
+
+
+
+## Create subscription - subscribe to revisions
+
+Here we create a simple revisions subscription on the _Orders_ collection that will send pairs of subsequent document revisions to the client.
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(
+    // Use Revision<T> as the type for the processed items
+    // e.g. Revision<Order>
+    new SubscriptionCreationOptions<Revision<Order>>());
+`}
+
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions()
+{
+ // Add (Revisions = true) to your subscription RQL
+ Query = @"From Orders (Revisions = true)"
+});
+`}
+
+
+
+
+Learn more about subscribing to document revisions in [subscriptions: revisions support](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx).
+
+
+
+## Create subscription - via update
+
+When attempting to update a subscription that does Not exist,
+you can request a new subscription to be created by setting `CreateNew` to `true`.
+In such a case, a new subscription will be created with the provided query.
+
+
+
+{`subscriptionName = store.Subscriptions.Update(new SubscriptionUpdateOptions()
+\{
+ Name = "my subscription",
+ Query = "from Products where PricePerUnit > 20",
+
+ // Set to true so that a new subscription will be created
+    // if a subscription with name "my subscription" does Not exist
+ CreateNew = true
+\});
+`}
+
+
+
+
+
+## Update existing subscription
+
+**Update subscription by name**:
+The subscription definition can be updated after it has been created.
+In this example we update the filtering **query** of an existing subscription named "my subscription".
+
+
+
+{`subscriptionName = store.Subscriptions.Update(new SubscriptionUpdateOptions()
+\{
+ // Specify the subscription you wish to modify
+ Name = "my subscription",
+
+ // Provide a new query
+ Query = "from Products where PricePerUnit > 50"
+\});
+`}
+
+
+**Update subscription by id**:
+In addition to the subscription name, each subscription is assigned a subscription ID when it is created by the server.
+This ID can be used instead of the name when updating the subscription.
+
+
+
+{`// Get the subscription's ID
+SubscriptionState mySubscription = store.Subscriptions.GetSubscriptionState("my subscription");
+long subscriptionId = mySubscription.SubscriptionId;
+
+// Update the subscription
+subscriptionName = store.Subscriptions.Update(new SubscriptionUpdateOptions()
+\{
+ Id = subscriptionId,
+ Query = "from Products where PricePerUnit > 50"
+\});
+`}
+
+
+
+Using the subscription ID allows you to modify the subscription name:
+
+
+
+{`// Get the subscription's ID
+mySubscription = store.Subscriptions.GetSubscriptionState("my subscription");
+subscriptionId = mySubscription.SubscriptionId;
+
+// Update the subscription name
+subscriptionName = store.Subscriptions.Update(new SubscriptionUpdateOptions()
+\{
+ Id = subscriptionId,
+ Name = "New name"
+\});
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-nodejs.mdx
new file mode 100644
index 0000000000..0c67863693
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-nodejs.mdx
@@ -0,0 +1,417 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This page contains examples of **creating a subscription**.
+ To learn how to consume and process documents sent by the subscription, see these [examples](../../../client-api/data-subscriptions/consumption/examples.mdx).
+
+* For a detailed syntax of the available subscription methods and objects, see this [API overview](../../../client-api/data-subscriptions/creation/api-overview.mdx).
+
+* In this page:
+ * [Create subscription - for all documents in a collection](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---for-all-documents-in-a-collection)
+ * [Create subscription - filter documents](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-documents)
+ * [Create subscription - filter and project fields](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-and-project-fields)
+ * [Create subscription - project data from a related document](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---project-data-from-a-related-document)
+ * [Create subscription - include documents](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-documents)
+ * [Create subscription - include counters](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters)
+ * [Create subscription - subscribe to revisions](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---subscribe-to-revisions)
+ * [Create subscription - via update](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---via-update)
+ * [Update existing subscription](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription)
+
+
+## Create subscription - for all documents in a collection
+
+Here we create a plain subscription on the _Orders_ collection without any constraints or transformations.
+The server will send ALL documents from the _Orders_ collection to a client that connects to this subscription.
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.create(\{
+ // Optionally, provide a custom name for the subscription
+ name: "OrdersProcessingSubscription",
+
+ // You can provide the collection name in the RQL string in the 'query' param
+ query: "from Orders"
+\});
+`}
+
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.create(\{
+ name: "OrdersProcessingSubscription",
+
+ // Or, you can provide the document type for the collection in the 'documentType' param
+ documentType: Order
+\});
+`}
+
+
+
+
+{`// Or, you can use the following overload,
+// pass the document class type to the 'create' method
+const subscriptionName = await documentStore.subscriptions.create(Order);
+`}
+
+
+
+
+
+## Create subscription - filter documents
+
+Here we create a subscription for documents from the _Orders_ collection where the total order revenue is greater than 100.
+Only documents that match this condition will be sent from the server to a client connected to this subscription.
+
+
+
+{`// Define the filtering criteria
+const query = \`
+ declare function getOrderLinesSum(doc) \{
+ var sum = 0;
+ for (var i in doc.Lines) \{
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ \}
+ return sum;
+ \}
+
+ from Orders as o
+ where getOrderLinesSum(o) > 100\`;
+
+// Create the subscription with the defined query
+const subscriptionName = await documentStore.subscriptions.create(\{ query \});
+
+// In this case, the server will create a default name for the subscription
+// since no specific name was provided when creating the subscription.
+`}
+
+
+
+
+
+## Create subscription - filter and project fields
+
+Here, again, we create a subscription for documents from the _Orders_ collection where the total order revenue is greater than 100.
+However, this time we only project the document ID and the Total Revenue properties in each object sent to the client.
+
+
+
+{`const query = \`
+ declare function getOrderLinesSum(doc) \{
+ var sum = 0;
+ for (var i in doc.Lines) \{
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ \}
+ return sum;
+ \}
+
+ declare function projectOrder(doc) \{
+ return \{
+ Id: doc.Id,
+ Total: getOrderLinesSum(doc)
+ \}
+ \}
+
+    from Orders as o
+ where getOrderLinesSum(o) > 100
+ select projectOrder(o)\`;
+
+const subscriptionName = await documentStore.subscriptions.create(\{ query \});
+`}
+
+
+
+
+
+## Create subscription - project data from a related document
+
+In this subscription, in addition to projecting the document fields,
+we also project data from a [related document](../../../indexes/indexing-related-documents.mdx#what-are-related-documents) that is loaded using the `load` method.
+
+
+
+{`const query = \`
+ declare function getOrderLinesSum(doc) \{
+ var sum = 0;
+ for (var i in doc.Lines) \{
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ \}
+ return sum;
+ \}
+
+ declare function projectOrder(doc) \{
+ var employee = load(doc.Employee);
+ return \{
+ Id: doc.Id,
+ Total: getOrderLinesSum(doc),
+ ShipTo: doc.ShipTo,
+ EmployeeName: employee.FirstName + ' ' + employee.LastName
+ \}
+ \}
+
+    from Orders as o
+ where getOrderLinesSum(o) > 100
+ select projectOrder(o)\`;
+
+const subscriptionName = await documentStore.subscriptions.create(\{ query \});
+`}
+
+
+
+
+
+## Create subscription - include documents
+
+Here we create a subscription on the _Orders_ collection, which will send all the _Order_ documents.
+
+In addition, the related _Product_ documents associated with each Order are **included** in the batch sent to the client.
+This way, when the subscription worker that processes the batch in the client accesses a _Product_ document, no additional call to the server will be made.
+
+See how to consume this type of subscription [here](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-uses-included-documents).
+
+
+
+
+{`const options = {
+ // The documents whose IDs are specified in the 'Product' property
+ // will be included in the batch
+ includes: builder => builder.includeDocuments("Lines[].Product"),
+ documentType: Order
+};
+
+const subscriptionName = await documentStore.subscriptions.create(options);
+`}
+
+
+
+
+{`const query = \`from Orders include Lines[].Product\`;
+const subscriptionName = await documentStore.subscriptions.create({ query });
+`}
+
+
+
+
+{`const query = \`
+ declare function includeProducts(doc) {
+ let includedFields = 0;
+ let linesCount = doc.Lines.length;
+
+ for (let i = 0; i < linesCount; i++) {
+ includedFields++;
+ include(doc.Lines[i].Product);
+ }
+
+ return doc;
+ }
+
+ from Orders as o select includeProducts(o)\`;
+
+const subscriptionName = await documentStore.subscriptions.create({ query });
+`}
+
+
+
+
+
+
+**Include using builder**:
+
+Include statements can be added to the subscription with a _builder_ object.
+This builder is assigned to the `includes` property in the _options_ object.
+It supports methods for including documents as well as [counters](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters).
+These methods can be chained.
+
+See this [API overview](../../../client-api/data-subscriptions/creation/api-overview.mdx#include-methods) for all available include methods.
+
+To include related documents, use method `includeDocuments`.
+(See the _Builder-syntax_ tab in the example above).
+
+
+
+
+**Include using RQL**:
+
+The include statements can be written in two ways:
+
+1. Use the `include` keyword at the end of the query, followed by the paths to the fields containing the IDs of the documents to include.
+ It is recommended to prefer this approach whenever possible, both for the clarity of the query and for slightly better performance.
+ (See the _RQL-path-syntax_ tab in the example above).
+
+2. Define the `include` within a JavaScript function that is called from the `select` clause.
+ (See the _RQL-javascript-syntax_ tab in the example above).
+
+
+
+
+
+If you include documents when making a [projection](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-and-project-fields),
+the include will search for the specified paths in the projected fields rather than in the original document.
+
+
+
+
+## Create subscription - include counters
+
+Here we create a subscription on the _Orders_ collection, which will send all the _Order_ documents.
+In addition, values for the specified counters will be **included** in the batch.
+
+Note:
+Modifying an existing counter's value after the document has been sent to the client does Not trigger re-sending.
+However, adding a new counter to the document or removing an existing one will trigger re-sending the document.
+
+
+
+
+{`const options = {
+ includes: builder => builder
+ // Values for the specified counters will be included in the batch
+ .includeCounters(["Pros", "Cons"]),
+ documentType: Order
+};
+
+const subscriptionName = await documentStore.subscriptions.create(options);
+`}
+
+
+
+
+{`const options = {
+ query: "from Orders include counters('Pros'), counters('Cons')"
+};
+
+const subscriptionName = await documentStore.subscriptions.create(options);
+`}
+
+
+
+
+**All include methods can be chained**:
+For example, the following subscription includes multiple counters and documents:
+
+
+
+{`const options = \{
+ includes: builder => builder
+ .includeCounter("Likes")
+ .includeCounters(["Pros", "Cons"])
+ .includeDocuments("Employee"),
+ documentType: Order
+\};
+
+const subscriptionName = await documentStore.subscriptions.create(options);
+`}
+
+
+
+
+
+## Create subscription - subscribe to revisions
+
+Here we create a simple revisions subscription on the _Orders_ collection that will send pairs of subsequent document revisions to the client.
+
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.createForRevisions({
+ documentType: Order
+});
+`}
+
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.createForRevisions({
+ query: "from Orders (Revisions = true)"
+});
+`}
+
+
+
+
+Learn more about subscribing to document revisions in [subscriptions: revisions support](../../../client-api/data-subscriptions/advanced-topics/subscription-with-revisioning.mdx).
+
+
+
+## Create subscription - via update
+
+When attempting to update a subscription that does Not exist,
+you can request a new subscription to be created by setting `createNew` to `true`.
+In such a case, a new subscription will be created with the provided query.
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.update(\{
+ name: "my subscription",
+ query: "from Products where PricePerUnit > 20",
+
+ // Set to true so that a new subscription will be created
+ // if a subscription with name "my subscription" does Not exist
+ createNew: true
+\});
+`}
+
+
+
+
+
+## Update existing subscription
+
+**Update subscription by name**:
+The subscription definition can be updated after it has been created.
+In this example we update the filtering **query** of an existing subscription named "my subscription".
+
+
+
+{`const subscriptionName = await documentStore.subscriptions.update(\{
+ // Specify the subscription you wish to modify
+ name: "my subscription",
+
+ // Provide a new query
+ query: "from Products where PricePerUnit > 50"
+\});
+`}
+
+
+**Update subscription by id**:
+In addition to the subscription name, each subscription is assigned a subscription ID when it is created by the server.
+This ID can be used instead of the name when updating the subscription.
+
+
+
+{`// Get the subscription's ID
+const mySubscription = await documentStore.subscriptions.getSubscriptionState("my subscription");
+const subscriptionId = mySubscription.subscriptionId;
+
+// Update the subscription
+const subscriptionName = await documentStore.subscriptions.update(\{
+ id: subscriptionId,
+ query: "from Products where PricePerUnit > 50"
+\});
+`}
+
+
+
+Using the subscription ID allows you to modify the subscription name:
+
+
+
+{`// Get the subscription's ID
+const mySubscription = await documentStore.subscriptions.getSubscriptionState("my subscription");
+const subscriptionId = mySubscription.subscriptionId;
+
+// Update the subscription's name
+const subscriptionName = await documentStore.subscriptions.update(\{
+ id: subscriptionId,
+ name: "new name"
+\});
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-python.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-python.mdx
new file mode 100644
index 0000000000..8a45037cb4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_examples-python.mdx
@@ -0,0 +1,322 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* This page contains examples of **creating a subscription**.
+ To learn how to consume and process documents sent by the subscription, see these [examples](../../../client-api/data-subscriptions/consumption/examples.mdx).
+
+* For a detailed syntax of the available subscription methods and objects, see this [API overview](../../../client-api/data-subscriptions/creation/api-overview.mdx).
+
+* In this page:
+ * [Create subscription - for all documents in a collection](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---for-all-documents-in-a-collection)
+ * [Create subscription - filter documents](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-documents)
+ * [Create subscription - filter and project fields](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-and-project-fields)
+ * [Create subscription - project data from a related document](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---project-data-from-a-related-document)
+ * [Create subscription - include documents](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-documents)
+ * [Create subscription - include counters](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters)
+ * [Update existing subscription](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription)
+
+
+## Create subscription - for all documents in a collection
+
+Here we create a plain subscription on the _Orders_ collection without any constraints or transformations.
+The server will send ALL documents from the _Orders_ collection to a client that connects to this subscription.
+
+
+
+
+{`name = store.subscriptions.create_for_class(
+ Order, SubscriptionCreationOptions(name="OrdersProcessingSubscription")
+)
+`}
+
+
+
+
+{`name = store.subscriptions.create_for_options(SubscriptionCreationOptions(query="From Orders"))
+`}
+
+
+
+
+
+
+## Create subscription - filter documents
+
+Here we create a subscription for documents from the _Orders_ collection where the total order revenue is greater than 100.
+Only documents that match this condition will be sent from the server to a client connected to this subscription.
+
+
+
+{`name = store.subscriptions.create_for_options(
+ SubscriptionCreationOptions(
+ query=(
+ "declare function getOrderLinesSum(doc) \{"
+ " var sum = 0;"
+ " for (var i in doc.Lines) \{"
+ " sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;"
+ " \}"
+ " return sum;"
+ "\}"
+ "From Orders as o "
+ "Where getOrderLinesSum(o) > 100 "
+ )
+ ),
+)
+`}
+
+
+
+
+
+## Create subscription - filter and project fields
+
+Here, again, we create a subscription for documents from the _Orders_ collection where the total order revenue is greater than 100.
+However, this time we project only the document ID and the total revenue in each object sent to the client.
+
+
+
+{`name = store.subscriptions.create_for_options(
+ SubscriptionCreationOptions(
+ query="""
+ declare function getOrderLinesSum(doc) \{
+ var sum = 0;
+ for (var i in doc.Lines) \{
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ \}
+ return sum;
+ \}
+
+ declare function projectOrder(doc) \{
+ return \{
+ Id: doc.Id,
+ Total: getOrderLinesSum(doc)
+ \};
+ \}
+
+ From Orders as o
+ Where getOrderLinesSum(o) > 100
+ Select projectOrder(o)
+ """
+ )
+)
+`}
+
+
+
+
+
+## Create subscription - project data from a related document
+
+In this subscription, in addition to projecting the document fields,
+we also project data from a [related document](../../../indexes/indexing-related-documents.mdx#what-are-related-documents) that is loaded using the `load` method.
+
+
+
+{`name = store.subscriptions.create_for_options(
+ SubscriptionCreationOptions(
+ query="""
+ declare function getOrderLinesSum(doc) \{
+ var sum = 0;
+ for (var i in doc.Lines) \{
+ sum += doc.Lines[i].PricePerUnit * doc.Lines[i].Quantity;
+ \}
+ return sum;
+ \}
+
+ declare function projectOrder(doc) \{
+ var employee = load(doc.Employee);
+ return \{
+ Id: doc.Id,
+ Total: getOrderLinesSum(doc),
+ ShipTo: doc.ShipTo,
+ EmployeeName: employee.FirstName + ' ' + employee.LastName
+ \};
+ \}
+
+ From Orders as o
+ Where getOrderLinesSum(o) > 100
+ Select projectOrder(o)
+ """
+ )
+)
+`}
+
+
+
+
+
+## Create subscription - include documents
+
+Here we create a subscription on the _Orders_ collection, which will send all the _Order_ documents.
+
+In addition, the related _Product_ documents associated with each Order are **included** in the batch sent to the client.
+This way, when the subscription worker that processes the batch in the client accesses a _Product_ document, no additional call to the server will be made.
+
+See how to consume this type of subscription [here](../../../client-api/data-subscriptions/consumption/examples.mdx#subscription-that-uses-included-documents).
+
+
+
+
+{`store.subscriptions.create_for_class(
+ Order,
+ SubscriptionCreationOptions(includes=lambda builder: builder.include_documents("Lines[].Product")),
+)
+`}
+
+
+
+
+{`store.subscriptions.create_for_options(
+ SubscriptionCreationOptions(query="from Orders include Lines[].Product")
+)
+`}
+
+
+
+
+{`store.subscriptions.create_for_options(
+ SubscriptionCreationOptions(
+ query="""
+ declare function includeProducts(doc) {
+ let includedFields = 0;
+ let linesCount = doc.Lines.length;
+
+ for (let i = 0; i < linesCount; i++) {
+ includedFields++;
+ include(doc.Lines[i].Product);
+ }
+
+ return doc;
+ }
+
+ from Orders as o select includeProducts(o)
+ """
+ )
+)
+`}
+
+
+
+
+
+
+**Include using builder**:
+
+Include statements can be added to the subscription with `SubscriptionIncludeBuilder`.
+This builder is assigned to the `includes` property in [SubscriptionCreationOptions](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscriptioncreationoptionst).
+It supports methods for including documents as well as [counters](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---include-counters).
+These methods can be chained.
+
+To include related documents, use the `include_documents` method.
+(See the _Builder-syntax_ tab in the example above).
+
+
+
+
+**Include using RQL**:
+
+The include statements can be written in two ways:
+
+1. Use the `include` keyword at the end of the query, followed by the paths to the fields containing the IDs of the documents to include.
+   Prefer this approach whenever possible, both for query clarity and for slightly better performance.
+ (See the _RQL-path-syntax_ tab in the example above).
+
+2. Define the `include` within a JavaScript function that is called from the `select` clause.
+ (See the _RQL-javascript-syntax_ tab in the example above).
+
+
+
+
+
+If you include documents when making a [projection](../../../client-api/data-subscriptions/creation/examples.mdx#create-subscription---filter-and-project-fields),
+the include will search for the specified paths in the projected fields rather than in the original document.
+
+
+
+
+
+## Create subscription - include counters
+
+`SubscriptionIncludeBuilder` has three methods for including counters:
+
+
+
+{`def include_counter(self, name: str) -> SubscriptionIncludeBuilder: ...
+
+def include_counters(self, *names: str) -> SubscriptionIncludeBuilder: ...
+
+def include_all_counters(self) -> SubscriptionIncludeBuilder: ...
+`}
+
+
+
+`include_counter` is used to specify a single counter.
+`include_counters` is used to specify multiple counters.
+`include_all_counters` retrieves all counters from all subscribed documents.
+
+| Parameter | Type | Description |
+|-------------|-------|--------------------------------------------------------------------------------------------------------------------------------------------------|
+| **name** | `str` | The name of a counter. The subscription will include all counters with this name that are contained in the documents the subscription retrieves. |
+| **\*names** | `str` | Array of counter names. |
+
+The following subscription, which includes multiple counters in the batch sent to the client,
+demonstrates how the methods can be chained.
+
+
+
+{`store.subscriptions.create_for_class(
+ Order,
+ SubscriptionCreationOptions(
+ includes=lambda builder: builder
+ .include_counter("Likes")
+ .include_counters("Pros", "Cons")
+ ),
+)
+`}
+
+
+
+
+
+## Update existing subscription
+
+The subscription definition can be updated after it has been created.
+In this example we update the filtering query of an existing subscription named "my subscription".
+
+
+
+{`store.subscriptions.update(SubscriptionUpdateOptions(
+    name="my subscription", query="from Products where PricePerUnit > 50"))
+`}
+
+
+
+
+**Modifying the subscription's name**:
+
+In addition to the subscription name, each subscription is assigned a **subscription ID** when it is created by the server.
+This ID can be used to identify the subscription, instead of the name, when updating the subscription.
+
+This allows users to change an existing subscription's **name** by specifying the subscription's ID
+and submitting a new string in the `name` field of `SubscriptionUpdateOptions`.
+
+
+
+{`my_subscription = store.subscriptions.get_subscription_state("my subscription")
+
+subscription_id = my_subscription.subscription_id
+
+store.subscriptions.update(SubscriptionUpdateOptions(key=subscription_id, name="new name"))
+`}
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-csharp.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-csharp.mdx
new file mode 100644
index 0000000000..fa87558cb4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-csharp.mdx
@@ -0,0 +1,105 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* A subscription task can be created in two ways:
+ * **From the client API**:
+ The client can create a subscription task on the server using this [creation API](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation).
+ * **From the Studio**:
+ See [creating subscription task](../../../studio/database/tasks/ongoing-tasks/subscription-task.mdx) to learn how to create a subscription task on the server via the Studio.
+
+* Once created, its definition and progress will be stored on the cluster, not on a single server.
+
+* Upon subscription creation, the cluster will choose a preferred node that will run the subscription
+ (unless the client has stated a responsible node).
+
+* From that point on, clients that connect to a server in order to consume the subscription will be redirected to that node.
+
+* In this page:
+ * [Subscription creation](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#subscription-creation)
+ * [Subscription name](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#subscription-name)
+ * [Responsible node](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#responsible-node)
+
+
+## Subscription creation
+
+A data subscription is a batch processing mechanism that sends documents that meet specific criteria to connected clients.
+
+In order to create a data subscription, we first need to define the criteria.
+The basic requirement is to specify the collection from which the subscription will retrieve documents.
+However, the criteria can be a complex RQL-like expression defining JavaScript functions that filter documents and project their content.
+
+* The following is a simple subscription definition:
+
+
+
+{`// With the following subscription definition, the server will send ALL documents
+// from the 'Orders' collection to a client that connects to this subscription.
+subscriptionName = store.Subscriptions.Create();
+`}
+
+
+
+* For more complex subscription creation scenarios, see these [examples](../../../client-api/data-subscriptions/creation/examples.mdx).
+
+* A subscription can also be modified after it has been created; see [update existing subscription](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription).
+
+
+
+
+## Subscription name
+
+In order to consume a data subscription, a subscription name is required to identify it.
+If you don't specify a name when creating the subscription, the server will automatically generate a default name.
+However, you have the option to provide a custom name for the subscription.
+
+A custom name can be useful for dedicated, long-running batch processing mechanisms,
+where it is more convenient to reference a human-readable name in the code, and even to reuse the same name across different environments
+(as long as the subscription is created upfront in each of them).
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions
+\{
+ // Set a custom name for the subscription
+ Name = "OrdersProcessingSubscription"
+\});
+`}
+
+
+
+
+Note that the subscription name is unique; it is not possible to create two subscriptions with the same name in the same database.
+
+
+
+
+## Responsible node
+
+As stated above, upon creation, the cluster chooses a node that will be responsible for managing the subscription task on the server side.
+Once chosen, that node will be the only node to manage the subscription.
+
+An Enterprise license feature supports failover of subscriptions (and any other ongoing task) between nodes.
+However, as long as the originally assigned node is online, it will be the one to manage the data subscription task.
+
+Nevertheless, there is an option to manually decide which node will be responsible for managing the subscription task.
+Provide the tag of the node you wish to be responsible in the `MentorNode` property as follows:
+
+
+
+{`subscriptionName = store.Subscriptions.Create(new SubscriptionCreationOptions
+\{
+ MentorNode = "D"
+\});
+`}
+
+
+
+Manually setting the node can help choose a more suitable server based on factors such as resources, client proximity, or other considerations.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-java.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-java.mdx
new file mode 100644
index 0000000000..65156682d4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-java.mdx
@@ -0,0 +1,99 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* A subscription task can be created in two ways:
+ * **From the client API**:
+ The client can create a subscription task on the server using this [creation API](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation).
+ * **From the Studio**:
+ See [creating subscription task](../../../studio/database/tasks/ongoing-tasks/subscription-task.mdx) to learn how to create a subscription task on the server via the Studio.
+
+* Once created, its definition and progress will be stored on the cluster, not on a single server.
+
+* Upon subscription creation, the cluster will choose a preferred node that will run the subscription
+ (unless the client has stated a mentor node).
+
+* From that point on, clients that connect to a server in order to consume the subscription will be redirected to that node.
+
+* In this page:
+ * [Subscription creation](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#subscription-creation)
+ * [Subscription name](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#subscription-name)
+ * [Responsible node](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#responsible-node)
+
+
+## Subscription creation
+
+A data subscription is a batch processing mechanism that sends documents that meet specific criteria to connected clients.
+
+In order to create a data subscription, we first need to define the criteria.
+The basic requirement is to specify the collection from which the subscription will retrieve documents.
+However, the criteria can be a complex RQL-like expression defining JavaScript functions that filter documents and project their content.
+
+* The following is a simple subscription definition:
+
+
+
+{`// With the following subscription definition, the server will send ALL documents
+// from the 'Orders' collection to a client that connects to this subscription.
+name = store.subscriptions().create(Order.class);
+`}
+
+
+
+* For more complex subscription definitions, see these [examples](../../../client-api/data-subscriptions/creation/examples.mdx).
+
+
+
+## Subscription name
+
+In order to consume a data subscription, a subscription name is required to identify it.
+If you don't specify a name when creating the subscription, the server will automatically generate a default name.
+However, you have the option to provide a custom name for the subscription.
+
+A custom name can be useful for dedicated, long-running batch processing mechanisms,
+where it is more convenient to reference a human-readable name in the code, and even to reuse the same name across different environments
+(as long as the subscription is created upfront in each of them).
+
+
+
+{`SubscriptionCreationOptions options = new SubscriptionCreationOptions();
+options.setName("OrdersProcessingSubscription");
+name = store.subscriptions().create(Order.class, options);
+`}
+
+
+
+
+Note that the subscription name is unique; it is not possible to create two subscriptions with the same name in the same database.
+
+
+
+
+## Responsible node
+
+As stated above, upon creation, the cluster chooses a node that will be responsible for managing the subscription task on the server side.
+Once chosen, that node will be the only node to manage the subscription.
+
+An Enterprise license feature supports failover of subscriptions (and any other ongoing task) between nodes.
+However, as long as the originally assigned node is online, it will be the one to manage the data subscription task.
+
+Nevertheless, there is an option to manually decide which node will be responsible for managing the subscription task.
+Provide the tag of the node you wish to be responsible in the `MentorNode` property as follows:
+
+
+
+{`SubscriptionCreationOptions options = new SubscriptionCreationOptions();
+options.setMentorNode("D");
+name = store.subscriptions().create(Order.class, options);
+`}
+
+
+
+Manually setting the node can help choose a more suitable server based on factors such as resources, client proximity, or other considerations.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-nodejs.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-nodejs.mdx
new file mode 100644
index 0000000000..516a3955c4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-nodejs.mdx
@@ -0,0 +1,107 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* A subscription task can be created in two ways:
+ * **From the client API**:
+ The client can create a subscription task on the server using this [creation API](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation).
+ * **From the Studio**:
+ See [creating subscription task](../../../studio/database/tasks/ongoing-tasks/subscription-task.mdx) to learn how to create a subscription task on the server via the Studio.
+
+* Once created, its definition and progress will be stored on the cluster, not on a single server.
+
+* Upon subscription creation, the cluster will choose a preferred node that will run the subscription
+ (unless the client has stated a mentor node).
+
+* From that point on, clients that connect to a server in order to consume the subscription will be redirected to that node.
+
+* In this page:
+ * [Subscription creation](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#subscription-creation)
+ * [Subscription name](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#subscription-name)
+ * [Responsible node](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#responsible-node)
+
+
+## Subscription creation
+
+A data subscription is a batch processing mechanism that sends documents that meet specific criteria to connected clients.
+
+In order to create a data subscription, we first need to define the criteria.
+The basic requirement is to specify the collection from which the subscription will retrieve documents.
+However, the criteria can be a complex RQL-like expression defining JavaScript functions that filter documents and project their content.
+
+* The following is a simple subscription definition:
+
+
+
+{`// With the following subscription definition, the server will send ALL documents
+// from the 'Orders' collection to a client that connects to this subscription.
+const subscriptionName = await documentStore.subscriptions.create(\{
+ query: "from Orders"
+\});
+`}
+
+
+
+* For more complex subscription creation scenarios, see these [examples](../../../client-api/data-subscriptions/creation/examples.mdx).
+
+* A subscription can also be modified after it has been created; see [update existing subscription](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription).
+
+
+
+## Subscription name
+
+In order to consume a data subscription, a subscription name is required to identify it.
+If you don't specify a name when creating the subscription, the server will automatically generate a default name.
+However, you have the option to provide a custom name for the subscription.
+
+A custom name can be useful for dedicated, long-running batch processing mechanisms,
+where it is more convenient to reference a human-readable name in the code, and even to reuse the same name across different environments
+(as long as the subscription is created upfront in each of them).
+
+
+
+{`const name = await store.subscriptions.create(\{
+ query: "from Orders",
+ // Set a custom name for the subscription
+ name: "OrdersProcessingSubscription"
+\});
+`}
+
+
+
+
+Note that the subscription name is unique; it is not possible to create two subscriptions with the same name in the same database.
+
+
+
+
+## Responsible node
+
+As stated above, upon creation, the cluster chooses a node that will be responsible for managing the subscription task on the server side.
+Once chosen, that node will be the only node to manage the subscription.
+
+An Enterprise license feature supports failover of subscriptions (and any other ongoing task) between nodes.
+However, as long as the originally assigned node is online, it will be the one to manage the data subscription task.
+
+Nevertheless, there is an option to manually decide which node will be responsible for managing the subscription task.
+Provide the tag of the node you wish to be responsible in the `mentorNode` property as follows:
+
+
+
+{`const name = await store.subscriptions.create(\{
+ query: "from Orders",
+    // Set a responsible node for the subscription task
+ mentorNode: "D"
+\});
+`}
+
+
+
+Manually setting the node can help choose a more suitable server based on factors such as resources, client proximity, or other considerations.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-python.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-python.mdx
new file mode 100644
index 0000000000..c5ef376ecc
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/_how-to-create-data-subscription-python.mdx
@@ -0,0 +1,99 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* A subscription task can be created in two ways:
+ * **From the client API**:
+ The client can create a subscription task on the server using this [creation API](../../../client-api/data-subscriptions/creation/api-overview.mdx#subscription-creation).
+ * **From the Studio**:
+ See [creating subscription task](../../../studio/database/tasks/ongoing-tasks/subscription-task.mdx) to learn how to create a subscription task on the server via the Studio.
+
+* Once created, its definition and progress will be stored on the cluster, not on a single server.
+
+* Upon subscription creation, the cluster will choose a preferred node that will run the subscription
+ (unless the client has stated a responsible node).
+
+* From that point on, clients that connect to a server in order to consume the subscription will be redirected to that node.
+
+* In this page:
+ * [Subscription creation](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#subscription-creation)
+ * [Subscription name](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#subscription-name)
+ * [Responsible node](../../../client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx#responsible-node)
+
+
+## Subscription creation
+
+A data subscription is a batch processing mechanism that sends documents that meet specific criteria to connected clients.
+
+In order to create a data subscription, we first need to define the criteria.
+The basic requirement is to specify the collection from which the subscription will retrieve documents.
+However, the criteria can be a complex RQL-like expression defining JavaScript functions that filter documents and project their content.
+
+* The following is a simple subscription definition:
+
+
+
+{`# With the following subscription definition, the server will send ALL documents
+# from the 'Orders' collection to a client that connects to this subscription.
+name = store.subscriptions.create_for_class(Order)
+`}
+
+
+
+* For more complex subscription definitions, see these [examples](../../../client-api/data-subscriptions/creation/examples.mdx).
+
+* A subscription can also be modified after it has been created; see [update existing subscription](../../../client-api/data-subscriptions/creation/examples.mdx#update-existing-subscription).
+
+
+
+## Subscription name
+
+In order to consume a data subscription, a subscription name is required to identify it.
+If you don't specify a name when creating the subscription, the server will automatically generate a default name.
+However, you have the option to provide a custom name for the subscription.
+
+A custom name can be useful for dedicated, long-running batch processing mechanisms,
+where it is more convenient to reference a human-readable name in the code, and even to reuse the same name across different environments
+(as long as the subscription is created upfront in each of them).
+
+
+
+{`name = store.subscriptions.create_for_class(
+ Order, SubscriptionCreationOptions(name="OrdersProcessingSubscription")
+)
+`}
+
+
+
+
+Note that the subscription name is unique; it is not possible to create two subscriptions with the same name in the same database.
+
+
+
+
+## Responsible node
+
+As stated above, upon creation, the cluster chooses a node that will be responsible for managing the subscription task on the server side.
+Once chosen, that node will be the only node to manage the subscription.
+
+An Enterprise license feature supports failover of subscriptions (and any other ongoing task) between nodes.
+However, as long as the originally assigned node is online, it will be the one to manage the data subscription task.
+
+Nevertheless, there is an option to manually decide which node will be responsible for managing the subscription task.
+Provide the tag of the node you wish to be responsible in the `mentor_node` property as follows:
+
+
+
+{`name = store.subscriptions.create_for_class(Order, SubscriptionCreationOptions(mentor_node="D"))
+`}
+
+
+
+Manually setting the node can help choose a more suitable server based on factors such as resources, client proximity, or other considerations.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/api-overview.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/api-overview.mdx
new file mode 100644
index 0000000000..6b8b498d56
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/api-overview.mdx
@@ -0,0 +1,42 @@
+---
+title: "Create and Update Subscription API"
+hide_table_of_contents: true
+sidebar_label: API Overview
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ApiOverviewCsharp from './_api-overview-csharp.mdx';
+import ApiOverviewPython from './_api-overview-python.mdx';
+import ApiOverviewNodejs from './_api-overview-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/examples.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/examples.mdx
new file mode 100644
index 0000000000..5955bff6bd
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/examples.mdx
@@ -0,0 +1,42 @@
+---
+title: "Data Subscription Creation Examples"
+hide_table_of_contents: true
+sidebar_label: Examples
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ExamplesCsharp from './_examples-csharp.mdx';
+import ExamplesPython from './_examples-python.mdx';
+import ExamplesNodejs from './_examples-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx
new file mode 100644
index 0000000000..c2dfa20da2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/creation/how-to-create-data-subscription.mdx
@@ -0,0 +1,46 @@
+---
+title: "How to Create a Data Subscription"
+hide_table_of_contents: true
+sidebar_label: How to Create a Data Subscription
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HowToCreateDataSubscriptionCsharp from './_how-to-create-data-subscription-csharp.mdx';
+import HowToCreateDataSubscriptionJava from './_how-to-create-data-subscription-java.mdx';
+import HowToCreateDataSubscriptionPython from './_how-to-create-data-subscription-python.mdx';
+import HowToCreateDataSubscriptionNodejs from './_how-to-create-data-subscription-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/data-subscriptions/what-are-data-subscriptions.mdx b/versioned_docs/version-7.1/client-api/data-subscriptions/what-are-data-subscriptions.mdx
new file mode 100644
index 0000000000..108c0aa5a0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/data-subscriptions/what-are-data-subscriptions.mdx
@@ -0,0 +1,43 @@
+---
+title: "Data Subscriptions"
+hide_table_of_contents: true
+sidebar_label: What are Data Subscriptions
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import WhatAreDataSubscriptionsCsharp from './_what-are-data-subscriptions-csharp.mdx';
+import WhatAreDataSubscriptionsJava from './_what-are-data-subscriptions-java.mdx';
+import WhatAreDataSubscriptionsNodejs from './_what-are-data-subscriptions-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/document-identifiers/_category_.json b/versioned_docs/version-7.1/client-api/document-identifiers/_category_.json
new file mode 100644
index 0000000000..c99a4325c7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/document-identifiers/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 12,
+  "label": "Document Identifiers"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/document-identifiers/_hilo-algorithm-csharp.mdx b/versioned_docs/version-7.1/client-api/document-identifiers/_hilo-algorithm-csharp.mdx
new file mode 100644
index 0000000000..419b87d639
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/document-identifiers/_hilo-algorithm-csharp.mdx
@@ -0,0 +1,317 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The HiLo algorithm is the default method used by a RavenDB client to generate unique document IDs when storing a new document **without explicitly providing an `Id` value**.
+
+* It is an efficient solution used by the [session](../session/what-is-a-session-and-how-does-it-work.mdx) to generate the numeric part of the document identifier.
+ These numeric values are then combined with the collection name and server node-tag to create document identifiers such as `orders/10-A` or `products/93-B`.
+
+* An example of storing a document without specifying its `Id` is available in section [Autogenerated HiLo IDs](../../client-api/document-identifiers/working-with-document-identifiers.mdx#autogenerated-ids).
+ For an overview of all methods for generating unique IDs in RavenDB, see:
+ * [Document identifier generation](../../server/kb/document-identifier-generation.mdx)
+ * [Working with document identifiers](../../client-api/document-identifiers/working-with-document-identifiers.mdx).
+* In this page:
+ * [How the HiLo algorithm works in RavenDB](../../client-api/document-identifiers/hilo-algorithm.mdx#how-the-hilo-algorithm-works-in-ravendb)
+ * [Generating unique IDs efficiently](../../client-api/document-identifiers/hilo-algorithm.mdx#generating-unique-ids-efficiently)
+ * [Using HiLo documents](../../client-api/document-identifiers/hilo-algorithm.mdx#using-hilo-documents)
+ * [Returning HiLo ranges](../../client-api/document-identifiers/hilo-algorithm.mdx#returning-hilo-ranges)
+ * [Identity parts separator](../../client-api/document-identifiers/hilo-algorithm.mdx#identity-parts-separator)
+ * [Manual HiLo ID generation](../../client-api/document-identifiers/hilo-algorithm.mdx#manual-hilo-id-generation)
+ * [Get next ID - number only](../../client-api/document-identifiers/hilo-algorithm.mdx#get-next-id---number-only)
+ * [Get next ID - full document ID](../../client-api/document-identifiers/hilo-algorithm.mdx#get-next-id---full-document-id)
+ * [Overriding the HiLo algorithm](../../client-api/document-identifiers/hilo-algorithm.mdx#overriding-the-hilo-algorithm)
+
+
+
+## How the HiLo algorithm works in RavenDB
+
+### Generating unique IDs efficiently:
+
+**The client creates IDs from a range of unique numbers that it gets from the server.**
+The HiLo algorithm is efficient because the client can automatically generate unique document IDs
+without checking with the server or cluster each time a new document is created to ensure that the new ID is unique.
+The client receives from the server a range of numbers that are reserved for the client's usage.
+
+Each time a session creates a new document, the client assigns the new document an ID based on the next number from that range.
+For example, the first client to generate documents on a collection will receive the reserved numbers 1-32. The next one will reserve numbers 33-64, and so on.
+
+**The collection name and node-tag are added to the ID.**
+To further ensure that no two clients generate a document with the same ID, the collection name and the server node-tag are added to the ID.
+This is an added measure so that if two nodes B and C are working with the same range of numbers, the IDs generated will be `orders/54-B` and `orders/54-C`.
+This situation is rare because as long as the nodes can communicate when requesting a range of numbers, the clients will receive a different range of numbers.
+The node-tag is added to ensure unique IDs across the cluster.
+
+Thus, with minimal trips to the server, the client is able to determine to which collection an entity belongs
+and automatically assign it a number with a node-tag to ensure that the ID is unique across the cluster.
+### Using HiLo documents:
+
+**HiLo documents are used by the server to provide the next range of numbers.**
+To allow multiple clients to generate identifiers simultaneously without producing duplicates,
+the server must coordinate the number ranges it hands out.
+
+This is handled by `Raven/HiLo/` documents, stored in the `@hilo` collection in the database.
+These documents are created and modified by the server and have a simple structure:
+
+
+
+{`\{
+ "Max": 32,
+ "@metadata": \{
+ "@collection": "@hilo"
+ \}
+\}
+`}
+
+
+
+The `Max` property holds the highest number that has been handed out to any client for creating identifiers in the given collection. It is used as follows (a simplified sketch of this client-side logic appears after the list):
+
+1. The client asks the server for a range of numbers that it can use to create a document.
+ (32 is the initial capacity, but the range size can dynamically expand based on how frequently the client requests HiLo ranges).
+2. The server then checks the HiLo document to see the last `Max` number it sent to any client for this collection.
+3. The client gets from the server the min and max values it can use (33 - 64 in our case).
+4. Then, the client creates a range object using the values received from the server.
+ This range object is then used to generate unique document IDs as needed.
+5. When the client reaches the max limit, it will repeat the process.
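+
+The following is a minimal sketch of this client-side range logic.
+The `HiLoRange` and `SimplifiedHiLoGenerator` types are hypothetical illustrations, not the actual client classes;
+the real implementation also handles thread safety and dynamic range sizing.
+
+
+
+{`// Simplified sketch - illustrative only, not the actual RavenDB client code
+public class HiLoRange
+\{
+    public long Current; // the last number handed out
+    public long Max;     // the upper bound of the reserved range
+\}
+
+public class SimplifiedHiLoGenerator
+\{
+    private HiLoRange _range = new HiLoRange \{ Current = 0, Max = 0 \};
+    private long _serverMax = 0; // stands in for the 'Max' value in the Raven/HiLo document
+
+    public long NextId()
+    \{
+        if (_range.Current >= _range.Max)
+        \{
+            // Range exhausted - request a new range from the server (steps 1-3 above)
+            _range = GetNextRange();
+        \}
+        return ++_range.Current;
+    \}
+
+    private HiLoRange GetNextRange()
+    \{
+        const int rangeSize = 32;
+        var min = _serverMax + 1;
+        _serverMax += rangeSize; // the real server persists this in the HiLo document
+        return new HiLoRange \{ Current = min - 1, Max = _serverMax \};
+    \}
+\}
+`}
+
+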
+
+
+
+## Returning HiLo ranges
+
+When the document store is disposed, the client sends the server the last value it used to create an identifier
+and the max value that was previously received from the server.
+
+If the max value on the server side is equal to the max value reported by the client,
+and the client's last used value is smaller than or equal to that max,
+the server will update the `Max` value to the client's last used value.
+
+
+
+{`var store = new DocumentStore();
+
+using (var session = store.OpenSession())
+\{
+ // Storing the first entity causes the client to receive the initial HiLo range (1-32)
+ session.Store(new Employee
+ \{
+ FirstName = "John",
+ LastName = "Doe"
+ \});
+
+ session.SaveChanges();
+ // The document ID will be: employees/1-A
+\}
+
+// Release the range when it is no longer relevant
+store.Dispose();
+`}
+
+
+
+`store.Dispose()` is used in this example to demonstrate that the range is released.
+In normal use, the `store` should only be disposed when the application is closed.
+
+After executing the code above, the `Max` value of the HiLo document for the _Employees_ collection on the server will be 1.
+That's because the client used only one identifier from the range it got before we disposed the store.
+
+The next time a client asks the server for a range of numbers for this collection, it will get (in our example) the range 2 - 33.
+
+
+
+{`var newStore = new DocumentStore();
+using (var session = newStore.OpenSession())
+\{
+ // Storing an entity after disposing the store in the previous example
+ // causes the client to receive the next HiLo range (2-33)
+ session.Store(new Employee
+ \{
+ FirstName = "Dave",
+ LastName = "Brown"
+ \});
+
+ session.SaveChanges();
+ // The document ID will be: employees/2-A
+\}
+`}
+
+
+
+
+
+
+
+#### Identity parts separator
+* By default, document IDs created by the server use the character `/` to separate their components.
+
+* This separator can be customized to any other character, except `|`, by setting the [IdentityPartsSeparator](../../client-api/configuration/conventions.mdx#identitypartsseparator) convention.
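+
+For example, a minimal sketch of changing the separator to `:` (the URL and database name below are placeholders;
+note that the convention must be set before `Initialize()` is called):
+
+
+
+{`var store = new DocumentStore
+\{
+    Urls = new[] \{ "http://localhost:8080" \}, // placeholder URL
+    Database = "Northwind"                    // placeholder database name
+\};
+
+// Must be set before calling Initialize()
+store.Conventions.IdentityPartsSeparator = ':';
+
+store.Initialize();
+
+// From this point on, autogenerated IDs will look like "orders:1-A"
+`}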
+
+
+
+## Manual HiLo ID generation
+
+* **Automatic generation**:
+ When the session stores a new document with the `Id` set to `null`, RavenDB's default HiLo ID generator automatically generates the ID for the document.
+ This document ID includes the collection name, a unique number, and the server node-tag, ensuring the ID is unique across the database.
+
+* **Manual generation**:
+ We provide you with the option of manually retrieving the next ID from the HiLo range currently reserved for the client without having to store the document first.
+ You can retrieve either the next number portion or the full document ID and then use it when storing the document, as explained below:
+ * [Get next ID - number only](../../client-api/document-identifiers/hilo-algorithm.mdx#get-next-id---number-only)
+ * [Get next ID - full document ID](../../client-api/document-identifiers/hilo-algorithm.mdx#get-next-id---full-document-id)
+### Get next ID - number only
+
+You can take advantage of the HiLo algorithm and create documents with your own customized ID that is based on the next HiLo ID number provided by the client.
+
+
+
+* Manually getting the next HiLo ID number only provides **the next number in the HiLo range**;
+  it does Not include the collection name and the server node-tag.
+* Therefore, when manually specifying your own IDs this way,
+ you are responsible for ensuring that the IDs are unique within the database.
+
+
+
+#### Syntax:
+
+Each of the following overloads returns the next available ID from the HiLo numbers reserved for the client.
+The returned ID number can then be used when storing a new document.
+
+
+
+{`Task<long> GenerateNextIdForAsync(string database, object entity);
+
+Task<long> GenerateNextIdForAsync(string database, Type type);
+
+Task<long> GenerateNextIdForAsync(string database, string collectionName);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **database** | `string` | The database for which to get the ID. `null` will get the ID for the default database set in the document store. |
+| **collectionName** | `string` | The collection for which to get the ID. |
+| **entity** | `object` | An instance of the specified collection. |
+| **type** | `Type` | The collection entity type. It is usually the singular of the collection name. For example, collection = "Orders", then type = "Order". |
+
+| Return value | Type | Description |
+|---------------|--------|------------------------------------------------------------------------|
+| **nextId** | `long` | The next available number from the HiLo range reserved for the client. |
+
+#### Example:
+
+The following example shows how to get the next ID number from the HiLo range reserved for the client.
+The ID provided is the next unique number, without the node tag or the collection name.
+This ID is then used to create and store a new document.
+
+Calling `GenerateNextIdForAsync` ensures minimal calls to the server,
+as the ID is generated by the client from the reserved range of numbers.
+
+
+
+{`using (var session = store.OpenSession())
+\{
+ // Use any overload to get the next id:
+ // (Note how the id increases with each call)
+ // ==========================================
+
+ var nextId = await store.HiLoIdGenerator.GenerateNextIdForAsync(null, "Products");
+ // nextId = 1
+
+ nextId = await store.HiLoIdGenerator.GenerateNextIdForAsync(null, new Product());
+ // nextId = 2
+
+ nextId = await store.HiLoIdGenerator.GenerateNextIdForAsync(null, typeof(Product));
+ // nextId = 3
+
+ // Now you can create a new document with the nextId received
+ // ==========================================================
+
+ var product = new Product
+ \{
+ Id = "MyCustomId/" + nextId.ToString()
+ \};
+
+ // Store the new document
+ // The document ID will be: "MyCustomId/3"
+ session.Store(product);
+ session.SaveChanges();
+\}
+`}
+
+
+
+
+
+##### Unique IDs across the cluster
+
+This manual generator sample is sufficient if you are using only one server.
+If you want to ensure unique IDs across the cluster, we recommend using [our default HiLo generator](../../client-api/document-identifiers/working-with-document-identifiers.mdx#autogenerated-ids).
+
+You may also consider using the [cluster-wide Identities generator](../../client-api/document-identifiers/working-with-document-identifiers.mdx#identities), which guarantees a unique ID across the cluster.
+It is more costly than the default HiLo generator because it requires a request from the server for _each ID_,
+and the server needs to do a Raft consensus check to ensure that the other nodes in the cluster agree that the ID is unique, then returns the ID to the client.
+
+
+### Get next ID - full document ID
+
+You can request to get the next full document ID from the default HiLo generator without having to store the document first.
+
+#### Syntax:
+
+
+
+{`Task<string> GenerateDocumentIdAsync(string database, object entity);
+`}
+
+
+
+#### Example:
+
+The latest HiLo ID number generated in the example above was `3`.
+Therefore, when running the following example immediately after,
+the consecutive number `4` is retrieved and incorporated into the full document ID (`products/4-A`) that is returned by `GenerateDocumentIdAsync`.
+
+
+
+{`using (var session = store.OpenSession())
+\{
+ var nextFullId = await store.HiLoIdGenerator.GenerateDocumentIdAsync(null, "Products");
+ // nextFullId = "products/4-A"
+
+ // You can now use the nextFullId and customize the document ID as you wish:
+ var product = new Product
+ \{
+ Id = "MyCustomId/" + nextFullId
+ \};
+
+ session.Store(product);
+ session.SaveChanges();
+\}
+`}
+
+
+
+
+
+## Overriding the HiLo algorithm
+
+* RavenDB's default HiLo generator is managed by the `HiLoIdGenerator` property in your _DocumentStore_ object.
+
+* If needed, you can override this default ID generation behavior by setting the [AsyncDocumentIdGenerator](../../client-api/configuration/conventions.mdx#asyncdocumentidgenerator) convention with your own implementation.
+
+* Once you configure your custom behavior through this convention:
+
+ * Your customized ID generation will be applied whenever you store a document without explicitly specifying an `Id`.
+
+ * Attempting to call [GenerateNextIdForAsync](../../client-api/document-identifiers/hilo-algorithm.mdx#get-next-id---number-only) or
+ [GenerateDocumentIdAsync](../../client-api/document-identifiers/hilo-algorithm.mdx#get-next-id---full-document-id) via the store's `HiLoIdGenerator`
+ will throw an exception.
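+
+For illustration, here is a minimal sketch of such an override.
+The ID-building logic below is arbitrary; it assumes the `AsyncDocumentIdGenerator` delegate receives the database name and the entity being stored:
+
+
+
+{`var store = new DocumentStore();
+
+// Must be configured before calling Initialize()
+store.Conventions.AsyncDocumentIdGenerator = (databaseName, entity) =>
+\{
+    // Arbitrary example: derive the ID from the entity type name plus a GUID
+    var id = entity.GetType().Name.ToLowerInvariant() + "s/" + Guid.NewGuid();
+    return Task.FromResult(id);
+\};
+
+store.Initialize();
+
+// From now on, storing an entity without an explicit Id
+// will use the custom generator above
+`}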
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/document-identifiers/_working-with-document-identifiers-csharp.mdx b/versioned_docs/version-7.1/client-api/document-identifiers/_working-with-document-identifiers-csharp.mdx
new file mode 100644
index 0000000000..ad9832bf0d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/document-identifiers/_working-with-document-identifiers-csharp.mdx
@@ -0,0 +1,372 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Each document in a RavenDB database has a unique string associated with it, called an **identifier**.
+ Every entity that you store, using either a [session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx)
+ or a [put document command](../../client-api/commands/documents/put.mdx), is assigned such an identifier.
+
+* RavenDB supports [several options](../../server/kb/document-identifier-generation.mdx) for storing a document and assigning
+ it an identifier.
+ The client can directly utilize these options.
+
+* You can always handle the identifier generation using your knowledge of the entity type and the identifier number provided
+ by the HiLo algorithm. As described below, this is how the identifier is generated by the session.
+
+* In this page:
+ * [Session Usage](../../client-api/document-identifiers/working-with-document-identifiers.mdx#session-usage)
+ * [Autogenerated IDs](../../client-api/document-identifiers/working-with-document-identifiers.mdx#autogenerated-ids)
+ * [Custom / Semantic IDs](../../client-api/document-identifiers/working-with-document-identifiers.mdx#custom-/-semantic-ids)
+ * [Server-side generated IDs](../../client-api/document-identifiers/working-with-document-identifiers.mdx#server-side-generated-ids)
+ * [Identities](../../client-api/document-identifiers/working-with-document-identifiers.mdx#identities)
+ * [Setting Identity IDs Using Commands and Operations](../../client-api/document-identifiers/working-with-document-identifiers.mdx#setting-identity-ids-using-commands-and-operations)
+ * [Using Commands](../../client-api/document-identifiers/working-with-document-identifiers.mdx#using-commands)
+ * [Using Operations](../../client-api/document-identifiers/working-with-document-identifiers.mdx#using-operations)
+
+
+## Session Usage
+
+If you choose to use the session, you don't have to pay any special attention to the identifiers of the stored entities.
+The session will take care of it by generating the identifiers automatically.
+
+It utilizes [conventions](../../client-api/configuration/conventions.mdx) and HiLo algorithms to produce the identifiers.
+Everything is handled by the session's mechanism and is transparent for the user.
+However, you can influence the identifier generation strategy by overriding
+[the identifier generation conventions](../../client-api/configuration/identifier-generation/global.mdx).
+
+In this article, we consider the behavior under the default conventions.
+
+
+Identifiers of documents in a RavenDB database are always strings, so take this into consideration when modeling your entities.
+
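+
+For example, a minimal sketch of an entity modeled with a string identifier (the `Order` class below is a placeholder):
+
+
+
+{`public class Order
+\{
+    // Document identifiers in RavenDB are always strings
+    public string Id \{ get; set; \}
+
+    public string Company \{ get; set; \}
+    public decimal Freight \{ get; set; \}
+\}
+`}
+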
+
+
+
+## Autogenerated IDs
+
+To figure out which property (or field) holds the entity's identifier, the `Conventions.FindIdentityProperty` convention is called.
+By default, it looks for the property or field named `Id` (case sensitive). However, this property can
+have a `null` value or even not be present at all. In that case, the automatic identifier generation strategy is applied.
+The default convention is that entities get identifiers in the following format: `collection/number-tag`.
+RavenDB client first determines the name of [the collection](../../client-api/faq/what-is-a-collection.mdx) that
+the entity belongs to, then contacts the server to retrieve a numeric range of values. These values
+can be used as the `number` part.
+The range of available numbers is calculated by using the `HiLo` algorithm and is tracked per collection.
+The current maximum value of each range is stored in the `Raven/HiLo/collection` documents.
+
+Let's see an example.
+
+
+
+{`var order = new Order
+\{
+ Id = null // value not provided
+\};
+
+session.Store(order);
+`}
+
+
+
+What will be the identifier of this order? You can check it by calling:
+
+
+
+{`var orderId = session.Advanced.GetDocumentId(order); // "orders/1-A"
+`}
+
+
+
+If this is the first `Order` entity in your database, then it will return `orders/1-A`. How does the identifier
+generation process proceed? The RavenDB client determines the collection name as `orders`
+(by default it is the plural form of the entity name).
+Then it asks the server for the ID range it can use (the first available range is 1 - 32). The server
+handles this through the `Raven/HiLo/orders` document.
+The next available identifier value (an always-incrementing number) from the given range is `1`, so its combination with
+the collection name and the node tag gives the result `orders/1-A`.
+
+The next attempt to store another `Order` object within the same session will result in creating the `orders/2-A`
+identifier. However, this time asking the server for a range will not be necessary because the in-memory range
+(1 - 32) is sufficient, so the next number is simply used as the identifier suffix.
+
+
+
+Each document store _instance_ (in code) handles the generation of identifier values from its numeric range. The database
+stores the last requested number, while the document store _instances_ request ranges and cache the returned range of
+available identifiers.
+
+The database has a single document (per collection) which stores the last identifier value requested by a document
+store instance.
+
+E.g. the document `Raven/HiLo/accounts` has the following value
+
+
+{`\{
+    "Max": 4000,
+ "@metadata": \{
+ "@collection": "@hilo"
+ \}
+\}
+`}
+
+
+
+then the next range will be `4001 - 4032`, assuming a range size of 32 (the default).
+
+The number of sessions per document store instance plays no part in identifier value generation. When the store is
+disposed of, the client sends the server the last value it used and the max value it got from the server.
+The server will then write the last used value to the HiLo document (if its `Max` number is equal to the max number reported by the client
+and greater than or equal to the last value used by the client).
+
+
+If you intend to skip the identifier creation strategy that relies on the collection and HiLo value pair,
+you can let RavenDB assign a GUID identifier to the stored document. To do so, provide
+`string.Empty` as the value of the `Id` property:
+
+
+
+{`var orderEmptyId = new Order
+\{
+ Id = string.Empty // database will create a GUID value for it
+\};
+
+session.Store(orderEmptyId);
+
+session.SaveChanges();
+
+var guidId = session.Advanced.GetDocumentId(orderEmptyId); // "bc151542-8fa7-45ac-bc04-509b343a8720"
+`}
+
+
+
+This time the document ID is checked after `SaveChanges`, because the identifier is generated on the server
+and is therefore known only after the server call.
+
+
+
+## Custom / Semantic IDs
+
+The session also supports storing an entity while explicitly specifying the identifier under which it should be stored
+in the database. To do this, you can either set the `Id` property of the object:
+
+
+
+{`var product = new Product
+\{
+ Id = "products/ravendb",
+ Name = "RavenDB"
+\};
+
+session.Store(product);
+`}
+
+
+
+or use the following `Store` method overload:
+
+
+
+{`session.Store(new Product
+\{
+ Name = "RavenDB"
+\}, "products/ravendb");
+`}
+
+
+
+
+
+## Server-side generated IDs
+
+RavenDB also supports generating identifiers without using HiLo. By creating a string ID property
+in your entity and setting it to a value ending with a slash (`/`), you can ask RavenDB to assign a document ID to
+a new document when it is saved.
+
+
+
+{`session.Store(new Company
+\{
+ Id = "companies/"
+\});
+
+session.SaveChanges();
+`}
+
+
+
+Using `/` at the end of the ID causes the ID to be created on the server side, by appending a numeric value and the node tag.
+After executing the code above, the server will return an ID that looks like `companies/000000000000000027-A`.
+
+
+Be aware that the numeric part is only guaranteed to be always increasing within the same node.
+
+
+
+
+## Identities
+
+If you need IDs to increment across the cluster, you can use the **Identity** option.
+To do so you need to use a pipe (`|`) as a suffix to the provided ID. This will instruct RavenDB
+to create the ID when the document is saved, using a special cluster-wide integer value that is
+continuously incremented.
+
+
+Using an identity guarantees that IDs will be incremental, but does **not** guarantee
+that there will be no gaps in the sequence.
+The ID sequence can therefore be, for example, `companies/1`, `companies/2`, `companies/4`.
+This is because:
+
+ * Documents could have been deleted.
+ * A failed transaction still increments the identity value, thus causing a gap in the sequence.
+
+
+
+
+{`session.Store(new Company
+\{
+ Id = "companies|"
+\});
+
+session.SaveChanges();
+`}
+
+
+
+After the execution of the code above, the ID will be `companies/1`.
+We do not add the node tag to the end of the ID, because the added number is unique in the cluster.
+Identities continuously increase, so running the above code again will generate `companies/2`, and so on.
+
+Note that we used `companies` as the prefix just to follow the RavenDB convention.
+Nothing prevents you from providing a different prefix, unrelated to the collection name.
+
+
+Be aware that using the pipe symbol (`|`) as a suffix to the ID generates a call to the cluster
+and might affect performance.
+
+
+
+
+* **Identity Parts Separator**
+By default, document IDs created by the server use `/` to separate their components.
+This separator can be changed to any other character except `|` using the
+[Global Identifier Generation Conventions](../../client-api/configuration/identifier-generation/global.mdx#identitypartsseparator);
+a configuration sketch follows this list.
+See [Setting Identity IDs Using Commands and Operations](../../client-api/document-identifiers/working-with-document-identifiers.mdx#setting-identity-ids-using-commands-and-operations)
+for details.
+
+* **Concurrent writes**
+  Identities are generated and updated on the server side in an atomic fashion.
+  This means you can safely use this approach in concurrent-write scenarios.
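+
+A minimal configuration sketch for changing the identity parts separator; the URL and database name are placeholders:
+
+
+
+{`var store = new DocumentStore
+\{
+    Urls = new[] \{ "http://localhost:8080" \},
+    Database = "Northwind",
+    Conventions =
+    \{
+        // Server-generated IDs will now look like "companies-1" instead of "companies/1"
+        IdentityPartsSeparator = '-'
+    \}
+\};
+store.Initialize();
+`}
+
+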
+
+
+
+
+## Setting Identity IDs Using Commands and Operations
+
+The commands API gives you full freedom in selecting the identifier generation strategy.
+
+* As in the case of a session, you can either ask the server to provide the identifier or provide the identifier of the
+ stored entity manually.
+* You can also indicate if the identifier that you are passing needs to have the identifier suffix added.
+ Do this by ending the ID with `/` or `|` as demonstrated below.
+
+
+
+{`var doc = new DynamicJsonValue
+\{
+ ["Name"] = "My RavenDB"
+\};
+
+var blittableDoc = session.Advanced.JsonConverter.ToBlittable(doc, null);
+
+var command = new PutDocumentCommand("products/", null, blittableDoc);
+
+session.Advanced.RequestExecutor.Execute(command, session.Advanced.Context);
+
+var identityId = command.Result.Id; // e.g. "products/0000000000000000001-A" when using '/' in the session
+
+var commandWithPipe = new PutDocumentCommand("products|", null, blittableDoc);
+session.Advanced.RequestExecutor.Execute(commandWithPipe, session.Advanced.Context);
+
+var identityPipeId = commandWithPipe.Result.Id; // "products/1"
+`}
+
+
+### Using Commands
+
+* **Get the next available identity from the server**
+  You can set the identifier on the client side while still relying on the server to generate its value.
+  This is done using the `NextIdentityForCommand` command, as shown below, passing the prefix for which you want
+  the server to provide the next available identifier.
+
+
+
+{`var command = new NextIdentityForCommand("products");
+session.Advanced.RequestExecutor.Execute(command, session.Advanced.Context);
+var identity = command.Result;
+
+var doc = new DynamicJsonValue
+\{
+ ["Name"] = "My RavenDB"
+\};
+
+var blittableDoc = session.Advanced.JsonConverter.ToBlittable(doc, null);
+
+var putCommand = new PutDocumentCommand("products/" + identity, null, blittableDoc);
+
+session.Advanced.RequestExecutor.Execute(putCommand, session.Advanced.Context);
+`}
+
+
+
+  Note that this construction requires contacting the server twice in order to add a single document.
+ You need to call `session.Advanced.RequestExecutor.Execute(command, session.Advanced.Context)` for every
+ entity that you want to store.
+
+ **Asking** the server about the next identifier results in **increasing this value** on the server-side.
+
+  Please note that you **cannot** fetch the next available identifier once and then increment it locally to create
+  the identifiers for a whole set of documents with the same prefix: you may accidentally overwrite documents, or
+  conflicts may occur if another client puts documents using the identity mechanism.
+
+* **Provide an identity of your choice**
+ You can choose an identifier's value yourself, using the `SeedIdentityForCommand` command.
+
+
+{`var seedIdentityCommand = new SeedIdentityForCommand("products", 1994);
+`}
+
+
+### Using Operations
+
+RavenDB 4.2 and higher provides high-level [operations](../../client-api/operations/what-are-operations.mdx#operations-what-are-the-operations)
+that you can set IDs with, in addition to the
+low-level [commands](../../client-api/document-identifiers/working-with-document-identifiers.mdx#using-commands)
+described above.
+There is no operational difference between using operations and commands, since the high-level operations actually
+execute low-level commands. However, using operations may produce clearer, more concise code.
+
+* Use the `NextIdentityForOperation` operation to choose the next value suggested by the server as an ID.
+ It is identical to using the `NextIdentityForCommand` command.
+
+
+{`store.Maintenance.Send(new NextIdentityForOperation("products"));
+`}
+
+
+
+* Use the `SeedIdentityForOperation` operation to choose your ID's value yourself.
+ It is identical to using the `SeedIdentityForCommand` command.
+
+
+{`var seedIdentityOperation = store.Maintenance.Send(new SeedIdentityForOperation("products", 1994));
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/document-identifiers/hilo-algorithm.mdx b/versioned_docs/version-7.1/client-api/document-identifiers/hilo-algorithm.mdx
new file mode 100644
index 0000000000..539549d5b4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/document-identifiers/hilo-algorithm.mdx
@@ -0,0 +1,44 @@
+---
+title: "HiLo Algorithm"
+hide_table_of_contents: true
+sidebar_label: HiLo algorithm
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HiloAlgorithmCsharp from './_hilo-algorithm-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/document-identifiers/working-with-document-identifiers.mdx b/versioned_docs/version-7.1/client-api/document-identifiers/working-with-document-identifiers.mdx
new file mode 100644
index 0000000000..a18697390f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/document-identifiers/working-with-document-identifiers.mdx
@@ -0,0 +1,38 @@
+---
+title: "Working with Document Identifiers"
+hide_table_of_contents: true
+sidebar_label: Working with Document Identifiers
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import WorkingWithDocumentIdentifiersCsharp from './_working-with-document-identifiers-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/faq/_category_.json b/versioned_docs/version-7.1/client-api/faq/_category_.json
new file mode 100644
index 0000000000..6f963e01b3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/faq/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 18,
+  "label": "FAQ"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/faq/assets/what-is-a-collection.png b/versioned_docs/version-7.1/client-api/faq/assets/what-is-a-collection.png
new file mode 100644
index 0000000000..2f3b4b610d
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/faq/assets/what-is-a-collection.png differ
diff --git a/versioned_docs/version-7.1/client-api/faq/backward-compatibility.mdx b/versioned_docs/version-7.1/client-api/faq/backward-compatibility.mdx
new file mode 100644
index 0000000000..e27a0dd69b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/faq/backward-compatibility.mdx
@@ -0,0 +1,92 @@
+---
+title: "FAQ: Backward Compatibility"
+hide_table_of_contents: true
+sidebar_label: Backward Compatibility
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# FAQ: Backward Compatibility
+
+
+* RavenDB is released in **Major** versions like 4.0 and 5.0, which are
+ complemented over time by **Minor** versions like 5.1 and 5.2.
+
+* This article explains which major and minor RavenDB Clients and Servers are
+ compatible, and advises regarding upgrading.
+
+* In this page:
+ * [Client/Server Compatibility](../../client-api/faq/backward-compatibility.mdx#client/server-compatibility)
+ * [Compatibility - Up to RavenDB 4.1](../../client-api/faq/backward-compatibility.mdx#compatibility---up-to-ravendb-41)
+ * [Compatibility - RavenDB 4.2 and Higher](../../client-api/faq/backward-compatibility.mdx#compatibility---ravendb-42-and-higher)
+ * [Upgrading](../../client-api/faq/backward-compatibility.mdx#upgrading)
+ * [Upgrading - Up to RavenDB 4.1](../../client-api/faq/backward-compatibility.mdx#upgrading---up-to-ravendb-41)
+ * [Upgrading - RavenDB 4.2 and Higher](../../client-api/faq/backward-compatibility.mdx#upgrading---ravendb-42-and-higher)
+ * [Upgrading Order](../../client-api/faq/backward-compatibility.mdx#upgrading-order)
+
+
+
+## Client/Server Compatibility
+
+### Compatibility - Up to RavenDB 4.1
+RavenDB **Clients** of versions lower than 4.2 are compatible with **Servers
+of the same Major version** (3.x Clients with 3.x Servers, 4.x Clients
+with 4.x Servers), and a **Minor version the same as theirs or higher**.
+E.g. -
+
+* `Client 3.0` is **compatible** with `Server 3.0`, because they are of the exact
+ same version.
+* `Client 4.0` is **compatible** with `Server 4.1` because they are of the same
+ major version and the server is of a higher minor version.
+* `Client 4.1.7` is **compatible** with `Server 4.1.6` because
+ though the client is a little newer, the server is of the same
+ minor version (1) as the client.
+* `Client 3.0` is **not** compatible with `Server 4.0` because the
+ server is of a different major version.
+* `Client 4.5` is **not** compatible with `Server 4.0` because the
+ server is of a lower minor version.
+
+
+
+* A server that receives an erroneous client request will check
+ whether the client version is supported.
+* If the client version is not supported, an exception will be thrown:
+ **`RavenDB does not support interaction between Client API major version 3 and Server version 4
+ when major version does not match.`**
+
+
+### Compatibility - RavenDB 4.2 and Higher
+Starting with version 4.2, RavenDB clients are compatible with
+any server of their own version **and higher**.
+E.g. -
+
+* `Client 4.2` is **compatible** with `Server 4.2`, `Server 4.5`,
+ `Server 5.2`, and any other server of a higher version.
+
+
+
+## Upgrading
+
+### Upgrading - Up to RavenDB 4.1
+Upgrading RavenDB from a version earlier than 4.2 to a higher major version
+requires upgrading the server and all clients in lockstep.
+Please visit our [migration introduction](../../migration/client-api/introduction.mdx)
+page to learn more about migrating from early versions.
+### Upgrading - RavenDB 4.2 and Higher
+When RavenDB is upgraded from version 4.2 and higher, e.g. from 4.2 to 5.3,
+it is recommended - but not mandatory - to upgrade the clients, since they
+are [compatible with servers of versions higher than theirs](../../client-api/faq/backward-compatibility.mdx#compatibility---ravendb-42-and-higher).
+### Upgrading Order
+To properly upgrade your applications and server, we advise you to upgrade the server first,
+then the clients.
+This way, your applications will keep working as before and you can update
+them one by one if needed.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/faq/transaction-support.mdx b/versioned_docs/version-7.1/client-api/faq/transaction-support.mdx
new file mode 100644
index 0000000000..feb3ea203b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/faq/transaction-support.mdx
@@ -0,0 +1,268 @@
+---
+title: "FAQ: Transaction Support in RavenDB"
+hide_table_of_contents: true
+sidebar_label: Transaction Support
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# FAQ: Transaction Support in RavenDB
+
+
+* In this page:
+ * [ACID storage](../../client-api/faq/transaction-support.mdx#acid-storage)
+ * [What is and what isn't a transaction](../../client-api/faq/transaction-support.mdx#what-is-and-what-isnt-a-transaction)
+ * [Working with transactions in RavenDB](../../client-api/faq/transaction-support.mdx#working-with-transactions-in-ravendb)
+ * [Single-node model](../../client-api/faq/transaction-support.mdx#single-node-model)
+ * [Multi-master model](../../client-api/faq/transaction-support.mdx#multi-master-model)
+ * [Cluster-wide transactions](../../client-api/faq/transaction-support.mdx#cluster-wide-transactions)
+ * [ACID for document operations](../../client-api/faq/transaction-support.mdx#acid-for-document-operations)
+ * [BASE for query operations](../../client-api/faq/transaction-support.mdx#base-for-query-operations)
+
+
+## ACID storage
+
+All storage operations performed in RavenDB are fully ACID-compliant (Atomicity, Consistency, Isolation, Durability).
+This is because RavenDB internally uses a storage engine called [Voron](../../server/storage/storage-engine.mdx), built specifically for RavenDB's usage,
+which guarantees all ACID properties, whether executed on document, index, or local cluster data.
+
+
+
+## What is and what isn't a transaction
+
+* A transaction represents a set of operations executed against a database as a single, atomic, and isolated unit.
+
+* In RavenDB, a transaction (read or write) is limited to the scope of a __single__ HTTP request.
+
+* The terms "ACID transaction" or "transaction" refer to the storage engine transactions.
+ Whenever a database receives an operation or batch of operations in a request, it will wrap it in a "storage transaction",
+ execute the operations and commit the transaction.
+
+* RavenDB ensures that for a single HTTP request, all the operations in that request are transactional.
+ It employs _Serializable_ isolation for write operations and _Snapshot_ isolation for read operations.
+
+* RavenDB doesn't support a transaction spanning __multiple__ HTTP requests. Interactive transactions are not implemented by RavenDB
+ (see [below](../../client-api/faq/transaction-support.mdx#no-support-for-interactive-transactions) for the reasoning behind this decision).
+ RavenDB offers the [optimistic concurrency](../../client-api/session/configuration/how-to-enable-optimistic-concurrency.mdx) feature to achieve similar behavior.
+
+* The [Client API Session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx) is a pure Client API object and does not represent a transaction,
+ thus it is not meant to provide interactive transaction semantics.
+ It is entirely managed on the client side without maintaining a corresponding session state on the server.
+ The server does not reference or keep track of the session context.
+
+
+
+## Working with transactions in RavenDB
+
+### Single-node model
+
+Transactional behavior with RavenDB is divided into two modes:
+
+* __Single requests__:
+In this mode, a user can perform all requested operations (read and/or write) in a single request.
+
+ * __Multiple writes__:
+ A batch of multiple write operations will be executed atomically in a single transaction when calling [SaveChanges()](../../client-api/session/saving-changes.mdx).
+ Multiple operations can also be executed in a single transaction using the low-level [SingleNodeBatchCommand](../../client-api/commands/batches/how-to-send-multiple-commands-using-a-batch.mdx).
+ In both cases, a single HTTP request is sent to the database (a sketch follows this list).
+
+ * __Multiple reads & writes__:
+ Performing interleaving reads and writes or conditional execution can be achieved by [running a patching script](../../client-api/operations/patching/single-document.mdx).
+ In the script you can read documents, make decisions based on their content and update or put document(s) within the scope of a single transaction.
+ If you only need to modify a document in a transaction, [JSON Patch syntax](../../client-api/operations/patching/json-patch-syntax.mdx) allows you to do that.
+
+* __Multiple requests__:
+ RavenDB does not support a single transaction that spans all requested operations within multiple requests.
+ Instead, users are expected to utilize [optimistic concurrency](../../client-api/session/configuration/how-to-enable-optimistic-concurrency.mdx) to achieve similar behavior.
+ Your changes will get committed only if no one else has changed the data you are modifying in the meantime.
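+
+To illustrate the __Multiple writes__ mode described above, here is a minimal sketch; the `Company` class and the initialized `store` are assumptions made for the example:
+
+
+
+{`using (var session = store.OpenSession())
+\{
+    session.Store(new Company \{ Name = "Acme" \});
+    session.Store(new Company \{ Name = "Contoso" \});
+
+    // Both writes are sent to the server in a single HTTP request
+    // and are committed atomically in one storage transaction.
+    session.SaveChanges();
+\}
+`}
+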
+
+#### No support for interactive transactions
+
+RavenDB client uses HTTP to communicate with the RavenDB server.
+It means that RavenDB doesn't allow you to open a transaction on the server side, make multiple operations over a network connection, and then commit or roll it back.
+This model, known as the interactive transactions model, is incredibly costly, both in terms of engine complexity and in its impact on the overall performance of the system.
+
+
+
+In [one study](http://nms.csail.mit.edu/~stavros/pubs/OLTP_sigmod08.pdf) the cost of managing the transaction state across multiple network operations was measured at over 40% of the total system performance.
+This is because the server needs to maintain locks and state across potentially very large time frames.
+
+
+
+RavenDB's approach differs from the classical SQL model, which relies on interactive transactions. Instead, RavenDB uses the batch transaction model. It allows us to provide the same capabilities as interactive transactions in
+conjunction with [optimistic concurrency](../../client-api/session/configuration/how-to-enable-optimistic-concurrency.mdx), with much better performance.
+
+Key to that design decision is our ability to provide similar guarantees about the state of your data without experiencing the overhead of interactive transactions.
+
+#### Batch transaction model
+
+RavenDB uses the batch transaction model, where a RavenDB client submits all the operations to be run in a single transaction in one network call.
+This allows the storage engine inside RavenDB to avoid holding locks for an extended period of time and gives plenty of room to optimize the performance.
+
+This decision is based on the typical interaction pattern by which RavenDB is used.
+RavenDB serves as a transactional system of record for business applications, where the common workflow involves presenting data to users,
+allowing them to make modifications, and subsequently save these changes.
+A single request loads the data which is then presented to the user.
+After a period of contemplation or "think time," the user submits a set of updates, which are then saved to the database.
+This model fits the batch transaction model a lot more closely than the interactive one, as there's no necessity to keep a transaction open during the user's "think time."
+
+All changes that are sent via _SaveChanges_ are persisted in a single unit.
+If you modify documents concurrently and want to ensure they won't be affected by the lost update problem,
+then you must enable [optimistic concurrency](../../client-api/session/configuration/how-to-enable-optimistic-concurrency.mdx) (turned off by default) across all sessions that modify those documents, as shown in the sketch below.
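+
+A minimal sketch of enabling optimistic concurrency in a session; the `Company` class and the document ID are assumptions made for the example:
+
+
+
+{`using (var session = store.OpenSession())
+\{
+    session.Advanced.UseOptimisticConcurrency = true;
+
+    var company = session.Load<Company>("companies/1-A");
+    company.Name = "Acme (renamed)";
+
+    // SaveChanges throws a ConcurrencyException if another client
+    // modified companies/1-A after this session loaded it.
+    session.SaveChanges();
+\}
+`}
+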
+
+<hr/>
+
+### Multi-master model
+
+RavenDB employs the multi-master model, allowing writes to be made to any node in the cluster.
+These writes are then propagated asynchronously to the other nodes via [replication](../../server/clustering/replication/replication-overview.mdx).
+
+The interaction of transactions and distributed work is anything but trivial. Let's start from the obvious problem:
+
+* RavenDB allows you to perform concurrent write operations on multiple nodes.
+* RavenDB explicitly allows you to write to a node that was partitioned from the rest of the network.
+
+Taken together, this violates the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem)
+which states that a system can only provide 2 out of 3 guarantees around consistency, availability, and partition tolerance.
+
+RavenDB's answer to distributed transactional work is nuanced and was designed to give you as the user the choice
+so you can utilize RavenDB for each of your scenarios:
+
+* Single-node operations are available and partition tolerant (AP) but cannot meet the consistency guarantee.
+* If you need to guarantee uniqueness or replicate the data for redundancy across more than one node,
+ you can choose to have higher consistency at the cost of availability (CP).
+
+When running in a multi-node setup, RavenDB still uses transactions. However, they are single-node transactions.
+That means that the set of changes that you write in a transaction is committed only to the node you are writing to.
+It will then asynchronously replicate to the other nodes.
+To achieve consistency across the entire cluster please refer to the [Cluster-wide transactions](../../client-api/faq/transaction-support.mdx#cluster-wide-transactions) section below.
+
+#### Replication conflicts
+
+This is an important observation because you can get into situations where two clients wrote (even with [optimistic concurrency](../../client-api/session/configuration/how-to-enable-optimistic-concurrency.mdx) turned on)
+to the same document and both of them committed successfully (each one to a separate node).
+RavenDB attempts to minimize this situation by designating a [preferred node](../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) for writes for each database,
+but since writing to the preferred node isn't guaranteed, this might not alleviate the issue.
+
+In such a case, the data will replicate across the cluster, and RavenDB will detect that there were [conflicting](../../server/clustering/replication/replication-conflicts.mdx) modifications to the document.
+It will then apply the [conflict resolution](../../studio/database/settings/conflict-resolution.mdx) strategy that you choose.
+That can include selecting a manual resolution, running a [resolution script](../../server/clustering/replication/replication-conflicts.mdx#conflict-resolution-script) to reconcile the conflicting versions,
+or simply selecting the latest version. You are in control of this behavior.
+
+This behavior was influenced by the [Dynamo paper](https://dl.acm.org/doi/10.1145/1323293.1294281) which emphasizes the importance of writes.
+The assumption is that if you are writing data to the database, you expect it to be persisted.
+
+RavenDB will do its utmost to provide that to you, allowing you to write to the database even in the case of partitions or partial failure states.
+However, handling replication conflicts is a consideration you have to take into account when using single-node transactions in RavenDB (see below for running a [cluster-wide transaction](../../client-api/faq/transaction-support.mdx#cluster-wide-transactions)).
+
+
+
+If no conflict resolution script is defined for a collection, then by default RavenDB resolves the conflict using the latest version based on the `@last-modified` property of conflicted versions of the document.
+That might result in the lost update anomaly.
+
+If you care about avoiding lost updates, you need to ensure you have the conflict resolution script defined accordingly or use a [cluster-wide transaction](../../client-api/faq/transaction-support.mdx#cluster-wide-transactions).
+
+
+
+#### Replication & transaction boundary
+
+The following is an important aspect of RavenDB's transactional behavior with regard to asynchronous replication.
+
+When replicating modifications to another server, RavenDB will ensure that the [transaction boundaries](../../server/clustering/replication/replication-overview.mdx#replication--transaction-boundary) are maintained.
+If there are several document modifications in the same transaction they will be sent in the same replication batch, keeping the transaction boundary on the destination as well.
+
+However, special attention is needed when a document is modified in two separate transactions and the replication of the first transaction has not occurred yet.
+Read more about this in [How revisions replication help data consistency](../../server/clustering/replication/replication-overview.mdx#how-revisions-replication-help-data-consistency).
+
+<hr/>
+
+### Cluster-wide transactions
+
+RavenDB also supports [cluster-wide transactions](../../client-api/session/cluster-transaction/overview.mdx).
+This feature modifies the way RavenDB commits a transaction, and it is meant to address scenarios where you prefer to get a failure if the transaction cannot be persisted to a majority of the nodes in the cluster.
+In other words, this feature is for scenarios where you want to favor consistency over availability.
+
+For cluster-wide transactions, RavenDB uses the [Raft](../../server/clustering/rachis/what-is-rachis.mdx#what-is-raft-?) protocol.
+This protocol ensures that the transaction is acknowledged by a majority of the nodes in the cluster and once committed, the changes will be visible on any node that you'll use henceforth.
+
+Similar to single-node transactions, RavenDB requires that you submit the cluster-wide transaction as a single request of all the changes you want to commit to the database.
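+
+A minimal sketch of opening a cluster-wide transaction from the client; the `Company` class is an assumption made for the example:
+
+
+
+{`using (var session = store.OpenSession(new SessionOptions
+\{
+    TransactionMode = TransactionMode.ClusterWide
+\}))
+\{
+    session.Store(new Company \{ Name = "Acme" \}, "companies/acme");
+
+    // The commit goes through Raft and succeeds only if a majority
+    // of the cluster nodes acknowledge the transaction.
+    session.SaveChanges();
+\}
+`}
+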
+
+Cluster-wide transactions have the notion of [atomic guards](../../client-api/session/cluster-transaction/atomic-guards.mdx) to prevent an overwrite of a document modified in a cluster transaction by a change made in another cluster transaction.
+
+
+
+The usage of atomic guards makes cluster-wide transactions conflict-free;
+there is no way to create a conflict between two versions of the same document.
+If a document was updated in the meantime by someone else, a `ConcurrencyException` will be thrown.
+
+
+
+
+
+## ACID for document operations
+
+In RavenDB all actions performed on documents are fully ACID.
+Each document operation or a batch of operations applied to a set of documents sent in a single HTTP request will execute in a single transaction.
+The ACID properties of RavenDB are:
+
+* __Atomicity__
+ All operations are atomic. Either they fully succeed or fail without any partial execution.
+ In particular, operations on multiple documents will be carried out atomically, meaning they are either completed entirely or not executed at all.
+
+* __Consistency and Isolation / Consistency of Scans__
+ Within a single _read_ transaction, all operations are performed under _Snapshot_ isolation.
+ This ensures that even if you access multiple documents, you'll get all of their state exactly as it was at the beginning of the request.
+
+* __Visibility__
+ All changes to the database are immediately made available upon commit.
+ Therefore, if a transaction updates two documents and is committed, you will always see the updates to both documents at the same time.
+ That is, you either see the updates to both, or you don't see the update to either one.
+
+* __Durability__
+ If an operation has been completed successfully, it is fsync'ed to disk.
+ Reads will never return any data that has not been flushed to disk.
+
+All of these constraints are guaranteed for each individual request made to the database when using a [Session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx).
+In particular, every `Load` call is a separate transaction, and the [`SaveChanges`](../../client-api/session/saving-changes.mdx)
+call will encapsulate all documents created, deleted, or modified within the session into a single transaction.
+
+
+
+## BASE for query operations
+
+The transaction model is different when indexes are involved, because indexes are BASE (Basically Available, Soft state, Eventual consistency), not ACID.
+The indexing in RavenDB will always happen in the background. When you write a new document or update an existing one, RavenDB doesn't wait to update all the indexes before it completes the write operation.
+Instead, it writes the document data and completes the write operation as soon as the transaction is written to disk, scheduling any index updates to occur in an async manner.
+
+There are several reasons for this behavior:
+
+* Writes are faster because they aren't going to be held up by the indexes.
+* Indexes running in an async manner allow updates to be handled in batches, instead of having to update all the indexes on every write.
+* Indexes are operating independently, so a single slow or expensive index isn't going to impact any other indexes or the overall write performance in the system.
+* Indexes can be added dynamically and on the fly to busy production systems.
+* Indexes can be updated in a [side-by-side manner](../../indexes/creating-and-deploying.mdx).
+
+The BASE model means that the following constraints are applied to query operations:
+
+* __Basically Available__
+ Index query results will always be available, but they might be stale.
+
+* __Soft state__
+ The state of the system could change over time because some amount of time is needed to perform the indexing.
+ This is an incremental operation; the fewer documents remain to be indexed, the more accurate the index results become.
+
+* __Eventual consistency__
+ The database will eventually become consistent once it stops receiving new documents and the indexing operation finishes.
+
+The async nature of RavenDB indexes means that you need to be aware that, by default, writes complete without waiting for indexes.
+There are, however, ways to wait for the indexes to catch up as part of the write, or even during the read (although the latter is not recommended); a sketch follows below.
+For more details, read the dedicated article about [stale indexes](../../indexes/stale-indexes.mdx).
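+
+A minimal sketch of waiting for indexes as part of a write; the `Company` class and the timeout value are assumptions made for the example:
+
+
+
+{`using (var session = store.OpenSession())
+\{
+    // Delay the SaveChanges acknowledgment until the indexes
+    // affected by this write have caught up (or the timeout expires).
+    session.Advanced.WaitForIndexesAfterSaveChanges(TimeSpan.FromSeconds(30));
+
+    session.Store(new Company \{ Name = "Acme" \});
+    session.SaveChanges();
+\}
+`}
+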
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/faq/what-is-a-collection.mdx b/versioned_docs/version-7.1/client-api/faq/what-is-a-collection.mdx
new file mode 100644
index 0000000000..ba336ae6d0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/faq/what-is-a-collection.mdx
@@ -0,0 +1,86 @@
+---
+title: "FAQ: What is a Collection"
+hide_table_of_contents: true
+sidebar_label: What is a Collection
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# FAQ: What is a Collection
+
+
+* **A collection** in RavenDB is a set of documents tagged with the same `@collection` metadata property.
+ Every document belongs to exactly one collection.
+
+* Since RavenDB is a schemaless database, there is no requirement that documents in the same collection share the same structure,
+ although typically a collection holds similarly structured documents, based on the document's entity type.
+
+* A collection is a purely virtual concept.
+ It has no influence on how or where documents within the same collection are physically stored.
+
+* Collections are used throughout many RavenDB features, such as defining indexes, setting revisions, and much more.
+
+* In this page:
+ * [Collection Name Generation](../../client-api/faq/what-is-a-collection.mdx#collection-name-generation)
+ * [Collection Usages](../../client-api/faq/what-is-a-collection.mdx#collection-usages)
+
+* For more information see [Documents and Collections](../../studio/database/documents/documents-and-collections.mdx)
+
+
+
+## Collection Name Generation
+
+**When storing an entity from the client:**
+
+* The document collection metadata is generated **based on the stored entity object type**.
+
+* By default, the client pluralizes the collection name based on the type name.
+ e.g. storing an entity of type `Order` will generate the collection name `Orders`.
+
+* The function that is responsible for tagging the documents can be overridden, as shown in the sketch below.
+ See: [Global Identifier Generation Conventions](../../client-api/configuration/identifier-generation/global.mdx#findtypetagname-and-finddynamictagname).
+
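+A minimal sketch of overriding the collection-naming function through the store conventions; the URL and database name are placeholders:
+
+
+
+{`var store = new DocumentStore
+\{
+    Urls = new[] \{ "http://localhost:8080" \},
+    Database = "Northwind"
+\};
+
+// Tag entities with their exact type name instead of the default
+// pluralized form (e.g. collection "Order" instead of "Orders").
+store.Conventions.FindCollectionName = type => type.Name;
+
+store.Initialize();
+`}
+
+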
+----
+
+**When creating a new document through the Studio:**
+
+* The collection metadata is generated **based on the document ID prefix**.
+ e.g. documents created with the following IDs: `users|23` / `users/45` / `users/17`
+ will all belong to the same `Users` collection.
+
+* For more information see [Documents and Collections](../../studio/database/documents/documents-and-collections.mdx)
+
+
+## Collection Usages
+
+* **A Collection Query**
+ * RavenDB keeps an internal storage index per collection created.
+ This internal index is used to query the database and retrieve only documents from a specified collection.
+
+* **In Indexing**
+ * Each [Map Index](../../indexes/map-indexes.mdx) is built against a single collection (or multiple collections when using a [Multi-Map Index](../../indexes/multi-map-indexes.mdx)).
+ During the indexing process, the index function iterates only over the documents that belong to the specified collection(s).
+
+* **In Revisions**
+ * Documents [Revisions](../../document-extensions/revisions/overview.mdx) can be defined per collection.
+
+* **In Ongoing Tasks**
+ * [RavenDB ETL](../../server/ongoing-tasks/etl/raven.mdx) and [SQL ETL](../../server/ongoing-tasks/etl/sql.mdx) tasks are defined on specified collections.
+
+* **The @hilo Collection**
+ * The ranges of available ID values returned by the [HiLo algorithm](../../client-api/document-identifiers/hilo-algorithm.mdx) are managed per collection name.
+ Learn more in: [The @hilo Collection](../../studio/database/documents/documents-and-collections.mdx#the-@hilo-collection)
+
+* **The @empty Collection**
+ * Learn more in: [The @empty Collection](../../studio/database/documents/documents-and-collections.mdx#the-@empty-collection)
+
+
+
+----
+
diff --git a/versioned_docs/version-7.1/client-api/how-to/_category_.json b/versioned_docs/version-7.1/client-api/how-to/_category_.json
new file mode 100644
index 0000000000..2e1b0cd793
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 17,
+  "label": "How to..."
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-csharp.mdx b/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-csharp.mdx
new file mode 100644
index 0000000000..33cf49e9e2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-csharp.mdx
@@ -0,0 +1,740 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+One of the design principles that RavenDB adheres to is the idea that documents are independent,
+meaning all data required to process a document is stored within the document itself.
+However, this doesn't mean there should not be relations between objects.
+
+There are valid scenarios where we need to define relationships between objects.
+By doing so, we expose ourselves to one major problem: whenever we load the containing entity,
+we are going to need to load data from the referenced entities as well (unless we are not interested in them).
+While the alternative of storing the whole entity in every object graph it is referenced in seems cheaper at first,
+this proves to be quite costly in terms of database resources and network traffic.
+
+RavenDB offers three elegant approaches to solve this problem. Each scenario will need to use one or more of them.
+When applied correctly, they can drastically improve performance, reduce network bandwidth, and speed up development.
+
+
+* In this page:
+ * [Denormalization](../../client-api/how-to/handle-document-relationships.mdx#denormalization)
+ * [Includes](../../client-api/how-to/handle-document-relationships.mdx#includes)
+ * [One to many includes](../../client-api/how-to/handle-document-relationships.mdx#one-to-many-includes)
+ * [Secondary level includes](../../client-api/how-to/handle-document-relationships.mdx#secondary-level-includes)
+ * [Dictionary includes](../../client-api/how-to/handle-document-relationships.mdx#dictionary-includes)
+ * [Dictionary includes: complex types](../../client-api/how-to/handle-document-relationships.mdx#dictionary-includes-complex-types)
+ * [Combining approaches](../../client-api/how-to/handle-document-relationships.mdx#combining-approaches)
+ * [Summary](../../client-api/how-to/handle-document-relationships.mdx#summary)
+
+## Denormalization
+
+The easiest solution is to denormalize the data within the containing entity,
+forcing it to contain the actual value of the referenced entity in addition to (or instead of) the foreign key.
+
+Take this JSON document for example:
+
+
+
+{`// Order document with ID: orders/1-A
+\{
+ "Customer": \{
+ "Name": "Itamar",
+ "Id": "customers/1-A"
+ \},
+ "Items": [
+ \{
+ "Product": \{
+ "Id": "products/1-A",
+ "Name": "Milk",
+ "Cost": 2.3
+ \},
+ "Quantity": 3
+ \}
+ ]
+\}
+`}
+
+
+
+As you can see, the `Order` document now contains denormalized data from both the `Customer` and the `Product` documents which are saved elsewhere in full.
+Note that we don't copy all the customer properties into the order;
+instead, we clone only the ones that we care about when displaying or processing an order.
+This approach is called *denormalized reference*.
+
+The denormalization approach avoids many cross document lookups and results in only the necessary data being transmitted over the network,
+but it makes other scenarios more difficult. For example, consider the following entity structure as our start point:
+
+
+
+{`public class Order
+\{
+ public string CustomerId \{ get; set; \}
+
+ public string[] SupplierIds \{ get; set; \}
+
+ public Referral Referral \{ get; set; \}
+
+ public LineItem[] LineItems \{ get; set; \}
+
+ public double TotalPrice \{ get; set; \}
+\}
+`}
+
+
+
+
+
+{`public class Customer
+\{
+ public string Id \{ get; set; \}
+
+ public string Name \{ get; set; \}
+\}
+`}
+
+
+
+If we know that whenever we load an `Order` from the database we will need to know the customer's name and address,
+we could decide to create a denormalized `Order.Customer` field and store those details directly in the `Order` object.
+Obviously, the password and other irrelevant details will not be denormalized:
+
+
+
+{`public class DenormalizedCustomer
+\{
+ public string Id \{ get; set; \}
+
+ public string Name \{ get; set; \}
+
+ public string Address \{ get; set; \}
+\}
+`}
+
+
+
+There wouldn't be a direct reference between the `Order` and the `Customer`.
+Instead, `Order` holds a `DenormalizedCustomer`, which contains the interesting bits from `Customer` that we need whenever we process `Order` objects.
+
+But what happens when the user's address is changed? We will have to perform an aggregate operation to update all orders this customer has made.
+What if the customer has a lot of orders or changes their address frequently? Keeping these details in sync could become very demanding on the server.
+What if another process that works with orders needs a different set of customer properties?
+The `DenormalizedCustomer` will need to be expanded, possibly to the point that the majority of the customer record is cloned.
+
+
+**Denormalization** is a viable solution for rarely changing data or for data that must remain the same despite the underlying referenced data changing over time.
+
+
+
+
+## Includes
+
+The **Includes** feature addresses the limitations of denormalization.
+Instead of one object containing copies of the properties from another object,
+it is only necessary to hold a reference to the second object, which can be:
+
+* a Document (as described below)
+* a [Document Revision](../../document-extensions/revisions/client-api/session/including.mdx)
+* a [Counter](../../document-extensions/counters/counters-and-other-features.mdx#including-counters)
+* a [Time series](../../document-extensions/timeseries/client-api/session/include/overview.mdx)
+* a [Compare exchange value](../../client-api/operations/compare-exchange/include-compare-exchange.mdx)
+
+The server can then be instructed to pre-load the referenced object at the same time that the root object is retrieved, using:
+
+
+
+{`Order order = session
+    .Include<Order>(x => x.CustomerId)
+    .Load("orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session
+    .Load<Customer>(order.CustomerId);
+`}
+
+
+
+Above we are asking RavenDB to retrieve the `Order` `orders/1-A`, and at the same time "include" the `Customer` referenced by the `Order.CustomerId` property.
+The second call to `Load()` is resolved completely client side (i.e. without a second request to the RavenDB server)
+because the relevant `Customer` object has already been retrieved (this is the full `Customer` object not a denormalized version).
+
+There is also a possibility to load multiple documents:
+
+
+
+{`Dictionary<string, Order> orders = session
+    .Include<Order>(x => x.CustomerId)
+    .Load("orders/1-A", "orders/2-A");
+
+foreach (Order order in orders.Values)
+\{
+    // this will not require querying the server!
+    Customer customer = session.Load<Customer>(order.CustomerId);
+\}
+`}
+
+
+
+You can also use Includes with queries:
+
+
+
+
+{`IList<Order> orders = session
+    .Query<Order>()
+    .Include(o => o.CustomerId)
+    .Where(x => x.TotalPrice > 100)
+    .ToList();
+
+foreach (Order order in orders)
+{
+    // this will not require querying the server!
+    Customer customer = session
+        .Load<Customer>(order.CustomerId);
+}
+`}
+
+
+
+
+{`IList<Order> orders = session
+    .Query<Order>()
+    .Include(i => i
+        .IncludeDocuments(x => x.CustomerId) // single document
+        .IncludeCounter("OrderUpdateCount")) // the fluent builder can include counters as well
+    .Where(x => x.TotalPrice > 100)
+    .ToList();
+
+foreach (Order order in orders)
+{
+    // this will not require querying the server!
+    Customer customer = session
+        .Load<Customer>(order.CustomerId);
+}
+`}
+
+
+
+
+{`IList<Order> orders = session
+    .Advanced
+    .DocumentQuery<Order>()
+    .Include(x => x.CustomerId)
+    .WhereGreaterThan(x => x.TotalPrice, 100)
+    .ToList();
+
+foreach (Order order in orders)
+{
+    // this will not require querying the server!
+    Customer customer = session
+        .Load<Customer>(order.CustomerId);
+}
+`}
+
+
+
+
+{`from Orders
+where TotalPrice > 100
+include CustomerId
+`}
+
+
+
+
+{`from Orders as o
+where TotalPrice > 100
+include CustomerId,counters(o,'OrderUpdateCount')
+`}
+
+
+
+
+This works because RavenDB has two channels through which it can return information in response to a load request.
+The first is the Results channel, through which the root object retrieved by the `Load()` method call is returned.
+The second is the Includes channel, through which any included documents are sent back to the client.
+Client side, those included documents are not returned from the `Load()` method call, but they are added to the session unit of work,
+and subsequent requests to load them are served directly from the session cache, without requiring any additional queries to the server.
+
+
+The inline and builder variants of the Include clause are essentially syntactic sugar and are equivalent on the server side.
+
+
+
+Streaming query results does not support the includes feature.
+Learn more in [How to Stream Query Results](../../client-api/session/querying/how-to-stream-query-results.mdx#stream-related-documents).
+
+### One to many includes
+
+Include can also be used with a one-to-many relationship.
+In the above classes, an `Order` has a property `SupplierIds` which contains an array of references to `Supplier` documents.
+The following code will cause the suppliers to be pre-loaded:
+
+
+
+{`Order order = session
+    .Include<Order>(x => x.SupplierIds)
+    .Load("orders/1-A");
+
+foreach (string supplierId in order.SupplierIds)
+\{
+    // this will not require querying the server!
+    Supplier supplier = session.Load<Supplier>(supplierId);
+\}
+`}
+
+
+
+Alternatively, it is possible to use the fluent builder syntax.
+
+
+
+{`var order = session.Load<Order>(
+    "orders/1-A",
+    i => i.IncludeDocuments(x => x.SupplierIds));
+
+foreach (string supplierId in order.SupplierIds)
+\{
+    // this will not require querying the server!
+    var supplier = session.Load<Supplier>(supplierId);
+\}
+`}
+
+
+
+The calls to `Load()` within the `foreach` loop will not require a call to the server as the `Supplier` objects will already be loaded into the session cache.
+
+Multi-loads are also possible:
+
+
+
+{`Dictionary<string, Order> orders = session
+    .Include<Order>(x => x.SupplierIds)
+    .Load("orders/1-A", "orders/2-A");
+
+foreach (Order order in orders.Values)
+\{
+    foreach (string supplierId in order.SupplierIds)
+    \{
+        // this will not require querying the server!
+        Supplier supplier = session.Load<Supplier>(supplierId);
+    \}
+\}
+`}
+
+
+### Secondary level includes
+
+An Include does not need to work only on the value of a top level property within a document.
+It can be used to load a value from a secondary level.
+In the classes above, the `Order` contains a `Referral` property which is of the type:
+
+
+
+{`public class Referral
+\{
+ public string CustomerId \{ get; set; \}
+
+ public double CommissionPercentage \{ get; set; \}
+\}
+`}
+
+
+
+This class contains an identifier for a `Customer`.
+The following code will include the document referenced by that secondary level identifier:
+
+
+
+{`Order order = session
+    .Include<Order>(x => x.Referral.CustomerId)
+    .Load("orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session.Load<Customer>(order.Referral.CustomerId);
+`}
+
+
+
+It is possible to execute the same code with the fluent builder syntax:
+
+
+
+{`var order = session.Load<Order>(
+    "orders/1-A",
+    i => i.IncludeDocuments(x => x.Referral.CustomerId));
+
+// this will not require querying the server!
+Customer customer = session.Load<Customer>(order.Referral.CustomerId);
+`}
+
+
+
+The alternative way is to provide a string-based path:
+
+
+
+{`Order order = session.Include("Referral.CustomerId")
+    .Load<Order>("orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session.Load<Customer>(order.Referral.CustomerId);
+`}
+
+
+
+With the fluent builder syntax, it is also possible to use a string-based path:
+
+
+
+{`var order = session.Load<Order>(
+    "orders/1-A",
+    i => i.IncludeDocuments("Referral.CustomerId"));
+
+// this will not require querying the server!
+Customer customer = session.Load<Customer>(order.Referral.CustomerId);
+`}
+
+
+
+This secondary level include will also work with collections.
+The `Order.LineItems` property holds a collection of `LineItem` objects which each contain a reference to a `Product`:
+
+
+
+{`public class LineItem
+\{
+ public string ProductId \{ get; set; \}
+
+ public string Name \{ get; set; \}
+
+ public int Quantity \{ get; set; \}
+\}
+`}
+
+
+
+The `Product` documents can be included using the following syntax:
+
+
+
+{`Order order = session
+    .Include<Order>(x => x.LineItems.Select(l => l.ProductId))
+    .Load("orders/1-A");
+
+foreach (LineItem lineItem in order.LineItems)
+\{
+    // this will not require querying the server!
+    Product product = session.Load<Product>(lineItem.ProductId);
+\}
+`}
+
+
+
+The fluent builder syntax works here too.
+
+
+
+{`var order = session.Load<Order>(
+    "orders/1-A",
+    i => i.IncludeDocuments(x => x.LineItems.Select(l => l.ProductId)));
+
+foreach (LineItem lineItem in order.LineItems)
+\{
+    // this will not require querying the server!
+    Product product = session.Load<Product>(lineItem.ProductId);
+\}
+`}
+
+
+
+The `Select()` within the Include tells RavenDB which property of secondary level objects to use as a reference.
+
+
+### String path conventions
+
+When using string-based includes like:
+
+
+
+{`Order order = session.Include("Referral.CustomerId")
+    .Load<Order>("orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session.Load<Customer>(order.Referral.CustomerId);
+`}
+
+
+
+you must follow certain rules when building the string path:
+
+1. **Dots** are used to separate properties
+ e.g. `"Referral.CustomerId"` in the example above means that our `Order` contains property `Referral` and that property contains another property called `CustomerId`.
+
+2. **Indexer operator** is used to indicate that a property is a collection type.
+ So if our `Order` has a list of line items and each `LineItem` contains a `ProductId` property, then we can create the string path as follows: `"LineItems[].ProductId"`.
+
+3. **Prefixes** can be used to indicate the prefix of the identifier of the document that is going to be included.
+ It can be useful when working with custom or semantic identifiers.
+ For example, if you have a customer stored under `customers/login@domain.com` then you can include it using `"Referral.CustomerEmail(customers/)"` (`customers/` is the prefix here).
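+
+For instance, a sketch of rule 3; the `CustomerEmail` property and the semantic ID are hypothetical and not part of the sample classes above:
+
+
+
+{`// The hypothetical Referral.CustomerEmail property holds "login@domain.com";
+// the (customers/) prefix makes the include resolve the document
+// stored under "customers/login@domain.com".
+Order order = session.Include("Referral.CustomerEmail(customers/)")
+    .Load<Order>("orders/1-A");
+`}
+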
+
+Knowing the string path rules is also useful when you want to query the database using the HTTP API.
+
+
+
+{`curl -X GET "http://localhost:8080/databases/Northwind/docs?id=orders/1-A&include=Lines[].Product"
+`}
+
+
+
+
+### Dictionary includes
+
+Dictionary keys and values can also be used when doing includes. Consider the following scenario:
+
+
+
+{`public class Person
+\{
+ public string Id \{ get; set; \}
+
+ public string Name \{ get; set; \}
+
+ public Dictionary<string, string> Attributes \{ get; set; \}
+\}
+`}
+
+
+
+
+
+{`session.Store(
+ new Person
+ \{
+ Id = "people/1-A",
+ Name = "John Doe",
+ Attributes = new Dictionary<string, string>
+ \{
+ \{ "Mother", "people/2" \},
+ \{ "Father", "people/3" \}
+ \}
+ \});
+
+session.Store(
+ new Person
+ \{
+ Id = "people/2",
+ Name = "Helen Doe",
+ Attributes = new Dictionary<string, string>()
+ \});
+
+session.Store(
+ new Person
+ \{
+ Id = "people/3",
+ Name = "George Doe",
+ Attributes = new Dictionary<string, string>()
+ \});
+`}
+
+
+
+Now we want to include all documents referenced by the dictionary values:
+
+
+
+{`var person = session
+    .Include<Person>(x => x.Attributes.Values)
+    .Load("people/1-A");
+
+var mother = session
+    .Load<Person>(person.Attributes["Mother"]);
+
+var father = session
+    .Load<Person>(person.Attributes["Father"]);
+
+Assert.Equal(1, session.Advanced.NumberOfRequests);
+`}
+
+
+
+The code above can also be rewritten with the fluent builder syntax:
+
+
+
+{`var person = session.Load<Person>(
+    "people/1-A",
+    i => i.IncludeDocuments(x => x.Attributes.Values));
+
+var mother = session
+    .Load<Person>(person.Attributes["Mother"]);
+
+var father = session
+    .Load<Person>(person.Attributes["Father"]);
+
+Assert.Equal(1, session.Advanced.NumberOfRequests);
+`}
+
+
+
+You can also include documents referenced by dictionary keys:
+
+
+
+{`var person = session
+    .Include<Person>(x => x.Attributes.Keys)
+    .Load("people/1-A");
+`}
+
+
+
+Here as well, this can be written with the fluent builder syntax:
+
+
+
+{`var person = session
+    .Load<Person>("people/1-A",
+        i => i.IncludeDocuments(x => x.Attributes.Keys));
+`}
+
+
+### Dictionary includes: complex types
+
+If the dictionary values are more complex, e.g.
+
+
+
+{`public class PersonWithAttribute
+\{
+ public string Id \{ get; set; \}
+
+ public string Name \{ get; set; \}
+
+ public Dictionary<string, Attribute> Attributes \{ get; set; \}
+\}
+
+public class Attribute
+\{
+ public string Ref \{ get; set; \}
+\}
+`}
+
+
+
+
+
+{`session.Store(
+ new PersonWithAttribute
+ \{
+ Id = "people/1-A",
+ Name = "John Doe",
+ Attributes = new Dictionary<string, Attribute>
+ \{
+ \{ "Mother", new Attribute \{ Ref = "people/2" \} \},
+ \{ "Father", new Attribute \{ Ref = "people/3" \} \}
+ \}
+ \});
+
+session.Store(
+ new Person
+ \{
+ Id = "people/2",
+ Name = "Helen Doe",
+ Attributes = new Dictionary<string, string>()
+ \});
+
+session.Store(
+ new Person
+ \{
+ Id = "people/3",
+ Name = "George Doe",
+ Attributes = new Dictionary<string, string>()
+ \});
+`}
+
+
+
+We can also do includes on specific properties:
+
+
+
+{`var person = session
+    .Include<PersonWithAttribute>(x => x.Attributes.Values.Select(v => v.Ref))
+    .Load("people/1-A");
+
+var mother = session
+    .Load<Person>(person.Attributes["Mother"].Ref);
+
+var father = session
+    .Load<Person>(person.Attributes["Father"].Ref);
+
+Assert.Equal(1, session.Advanced.NumberOfRequests);
+`}
+
+
+
+
+
+## Combining approaches
+
+It is possible to combine the above techniques.
+Using the `DenormalizedCustomer` from above and creating an order that uses it:
+
+
+
+{`public class Order3
+\{
+ public DenormalizedCustomer Customer \{ get; set; \}
+
+ public string[] SupplierIds \{ get; set; \}
+
+ public Referral Referral \{ get; set; \}
+
+ public LineItem[] LineItems \{ get; set; \}
+
+ public double TotalPrice \{ get; set; \}
+\}
+`}
+
+
+
+We have the advantages of denormalization: a quick and simple load of an `Order`,
+along with the fairly static `Customer` details that are required for most processing.
+We also retain the ability to easily and efficiently load the full `Customer` object when necessary using:
+
+
+
+{`Order3 order = session
+    .Include<Order3>(x => x.Customer.Id)
+    .Load("orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session.Load<Customer>(order.Customer.Id);
+`}
+
+
+
+This combining of denormalization and Includes could also be used with a list of denormalized objects.
+
+It is possible to use Include on a query that returns a projection;
+Includes are evaluated after the projection has been evaluated.
+This opens up the possibility of implementing Tertiary Includes (i.e. retrieving documents that are referenced by documents that are referenced by the root document).
+
+RavenDB can support Tertiary Includes, but before resorting to them you should re-evaluate your document model.
+Needing Tertiary Includes can be an indication that you are designing your documents along "Relational" lines.
+
+
+
+## Summary
+
+There are no strict rules as to when to use which approach, but the general idea is to give it a lot of thought and consider the implications each approach has.
+
+As an example, in an e-commerce application it might be better to denormalize product names and prices into an order line object
+since you want to make sure the customer sees the same price and product title in the order history.
+But the customer name and addresses should probably be references rather than denormalized into the order entity.
+
+For most cases where denormalization is not an option, Includes are probably the answer.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-java.mdx b/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-java.mdx
new file mode 100644
index 0000000000..593aa63ba1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-java.mdx
@@ -0,0 +1,850 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+One of the design principles that RavenDB adheres to is the idea that documents are independent,
+meaning all data required to process a document is stored within the document itself.
+However, this doesn't mean there should not be relations between objects.
+
+There are valid scenarios where we need to define relationships between objects.
+By doing so, we expose ourselves to one major problem: whenever we load the containing entity,
+we are going to need to load data from the referenced entities as well (unless we are not interested in them).
+While the alternative of storing the whole entity in every object graph it is referenced in seems cheaper at first,
+this proves to be quite costly in terms of database resources and network traffic.
+
+RavenDB offers three elegant approaches to solve this problem. Each scenario will need to use one or more of them.
+When applied correctly, they can drastically improve performance, reduce network bandwidth, and speed up development.
+
+## Denormalization
+
+The easiest solution is to denormalize the data within the containing entity,
+forcing it to contain the actual value of the referenced entity in addition to (or instead of) the foreign key.
+
+Take this JSON document for example:
+
+
+
+{`// Order document with ID: orders/1-A
+\{
+ "Customer": \{
+ "Name": "Itamar",
+ "Id": "customers/1-A"
+ \},
+ "Items": [
+ \{
+ "Product": \{
+ "Id": "products/1-A",
+ "Name": "Milk",
+ "Cost": 2.3
+ \},
+ "Quantity": 3
+ \}
+ ]
+\}
+`}
+
+
+
+As you can see, the `Order` document now contains denormalized data from both the `Customer` and the `Product` documents, which are saved elsewhere in full.
+Note that we don't copy all the customer fields into the order; instead we clone only the ones that we care about when displaying or processing an order.
+This approach is called *denormalized reference*.
+
+The denormalization approach avoids many cross document lookups and results in only the necessary data being transmitted over the network,
+but it makes other scenarios more difficult. For example, consider the following entity structure as our starting point:
+
+
+
+{`public class Order \{
+ private String customerId;
+ private String[] supplierIds;
+ private Referral referral;
+ private LineItem[] lineItems;
+ private double totalPrice;
+
+ public String getCustomerId() \{
+ return customerId;
+ \}
+
+ public void setCustomerId(String customerId) \{
+ this.customerId = customerId;
+ \}
+
+ public String[] getSupplierIds() \{
+ return supplierIds;
+ \}
+
+ public void setSupplierIds(String[] supplierIds) \{
+ this.supplierIds = supplierIds;
+ \}
+
+ public Referral getReferral() \{
+ return referral;
+ \}
+
+ public void setReferral(Referral referral) \{
+ this.referral = referral;
+ \}
+
+ public LineItem[] getLineItems() \{
+ return lineItems;
+ \}
+
+ public void setLineItems(LineItem[] lineItems) \{
+ this.lineItems = lineItems;
+ \}
+
+ public double getTotalPrice() \{
+ return totalPrice;
+ \}
+
+ public void setTotalPrice(double totalPrice) \{
+ this.totalPrice = totalPrice;
+ \}
+\}
+`}
+
+
+
+
+
+{`public class Customer \{
+ private String id;
+ private String name;
+
+ public String getId() \{
+ return id;
+ \}
+
+ public void setId(String id) \{
+ this.id = id;
+ \}
+
+ public String getName() \{
+ return name;
+ \}
+
+ public void setName(String name) \{
+ this.name = name;
+ \}
+\}
+`}
+
+
+
+If we know that whenever we load an `Order` from the database we will need to know the customer's name and address,
+we could decide to create a denormalized `Order.Customer` field and store those details directly in the `Order` object.
+Obviously, the password and other irrelevant details will not be denormalized:
+
+
+
+{`public class DenormalizedCustomer \{
+ private String id;
+ private String name;
+ private String address;
+
+ public String getId() \{
+ return id;
+ \}
+
+ public void setId(String id) \{
+ this.id = id;
+ \}
+
+ public String getName() \{
+ return name;
+ \}
+
+ public void setName(String name) \{
+ this.name = name;
+ \}
+
+ public String getAddress() \{
+ return address;
+ \}
+
+ public void setAddress(String address) \{
+ this.address = address;
+ \}
+\}
+`}
+
+
+
+There wouldn't be a direct reference between the `Order` and the `Customer`.
+Instead, `Order` holds a `DenormalizedCustomer`, which contains the interesting bits from `Customer` that we need whenever we process `Order` objects.
+
+But what happens when the user's address is changed?
+We will have to perform an aggregate operation to update all orders this customer has made.
+What if the customer has a lot of orders or changes their address frequently?
+Keeping these details in sync could become very demanding on the server.
+What if another process that works with orders needs a different set of customer fields?
+The `DenormalizedCustomer` will need to be expanded, possibly to the point that the majority of the customer record is cloned.
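+
+If the denormalized values do have to change, a set-based patch can update every affected order in a single server-side operation. The following RQL is a minimal sketch only, assuming the denormalized `Customer.Name` field from above and a hypothetical customer id:
+
+
+
+{`from Orders
+where Customer.Id = 'customers/1-A'
+update \{
+    // rewrite only the denormalized copy; the Customer document itself is untouched
+    this.Customer.Name = 'New Name';
+\}
+`}
+
+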
+
+
+**Denormalization** is a viable solution for rarely changing data or for data that must remain the same despite the underlying referenced data changing over time.
+
+
+## Includes
+
+The **Includes** feature addresses the limitations of denormalization.
+Instead of one object containing copies of the fields from another object, it is only necessary to hold a reference to the second object.
+Then the server can be instructed to pre-load the referenced document at the same time that the root object is retrieved. We do this using:
+
+
+
+{`Order order = session
+ .include("CustomerId")
+ .load(Order.class, "orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session.load(Customer.class, order.getCustomerId());
+`}
+
+
+
+Above we are asking RavenDB to retrieve the `Order` `orders/1-A`, and at the same time "include" the `Customer` referenced by the `Order.CustomerId` field.
+The second call to `load()` is resolved completely client side (i.e. without a second request to the RavenDB server)
+because the relevant `Customer` object has already been retrieved (this is the full `Customer` object not a denormalized version).
+
+There is also a possibility to load multiple documents:
+
+
+
+{`Map<String, Order> orders = session
+ .include("CustomerId")
+ .load(Order.class, "orders/1-A", "orders/2-A");
+
+for (Order order : orders.values()) \{
+ Customer customer = session.load(Customer.class, order.getCustomerId());
+\}
+`}
+
+
+
+You can also use Includes with queries:
+
+
+
+
+{`List<Order> orders = session
+ .query(Order.class)
+ .include("CustomerId")
+ .whereGreaterThan("TotalPrice", 100)
+ .toList();
+
+for (Order order : orders) {
+ // this will not require querying the server!
+ Customer customer = session
+ .load(Customer.class, order.getCustomerId());
+}
+`}
+
+
+
+
+{`List<Order> orders = session
+    .query(Order.class)
+    .include(i -> i
+        .includeDocuments("CustomerId")
+        .includeCounter("OrderUpdateCount"))
+ .whereGreaterThan("TotalPrice", 100)
+ .toList();
+
+for (Order order : orders) {
+ // this will not require querying the server!
+ Customer customer = session
+ .load(Customer.class, order.getCustomerId());
+}
+`}
+
+
+
+
+{`from Orders
+where TotalPrice > 100
+include CustomerId
+`}
+
+
+
+
+{`from Orders as o
+where TotalPrice > 100
+include CustomerId,counters(o,'OrderUpdateCount')
+`}
+
+
+
+
+This works because RavenDB has two channels through which it can return information in response to a load request.
+The first is the Results channel, through which the root object retrieved by the `load()` method call is returned.
+The second is the Includes channel, through which any included documents are sent back to the client.
+Client side, those included documents are not returned from the `load()` method call, but they are added to the session unit of work,
+and subsequent requests to load them are served directly from the session cache, without requiring any additional queries to the server.
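+
+To illustrate, the raw HTTP response for a load with an include carries both channels. Its shape is roughly the following (an approximation only, with metadata and the remaining properties elided):
+
+
+
+{`\{
+    "Results": [
+        \{ "CustomerId": "customers/1-A", ... \}
+    ],
+    "Includes": \{
+        "customers/1-A": \{ "Name": "Itamar", ... \}
+    \}
+\}
+`}
+
+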
+
+
+Embedded and builder variants of the Include clause are essentially syntactic sugar and are equivalent on the server side.
+
+
+
+Streaming query results does not support the includes feature.
+Learn more in [How to Stream Query Results](../../client-api/session/querying/how-to-stream-query-results.mdx#stream-related-documents).
+
+
+### One to many includes
+
+Include can be used with a one to many relationship.
+In the above classes, an `Order` has a field `SupplierIds` which contains an array of references to `Supplier` documents.
+The following code will cause the suppliers to be pre-loaded:
+
+
+
+{`Order order = session
+ .include("SupplierIds")
+ .load(Order.class, "orders/1-A");
+
+for (String supplierId : order.getSupplierIds()) \{
+ // this will not require querying the server!
+ Supplier supplier = session.load(Supplier.class, supplierId);
+\}
+`}
+
+
+
+Alternatively, it is possible to use the fluent builder syntax.
+
+
+
+{`Order order = session.load(Order.class, "orders/1-A",
+ i -> i.includeDocuments("SupplierIds"));
+
+for (String supplierId : order.getSupplierIds()) \{
+ // this will not require querying the server!
+ Supplier supplier = session.load(Supplier.class, supplierId);
+\}
+`}
+
+
+
+The calls to `load()` within the for-each loop will not require a call to the server as the `Supplier` objects will already be loaded into the session cache.
+
+Multi-loads are also possible:
+
+
+
+{`Map<String, Order> orders = session
+    .include("SupplierIds")
+    .load(Order.class, "orders/1-A", "orders/2-A");
+
+for (Order order : orders.values()) \{
+    for (String supplierId : order.getSupplierIds()) \{
+        // this will not require querying the server!
+        Supplier supplier = session.load(Supplier.class, supplierId);
+    \}
+\}
+`}
+
+
+
+### Secondary level includes
+
+An Include does not need to work only on the value of a top level field within a document.
+It can be used to load a value from a secondary level.
+In the classes above, the `Order` contains a `Referral` field which is of the type:
+
+
+
+{`public class Referral \{
+ private String customerId;
+ private double commissionPercentage;
+
+ public String getCustomerId() \{
+ return customerId;
+ \}
+
+ public void setCustomerId(String customerId) \{
+ this.customerId = customerId;
+ \}
+
+ public double getCommissionPercentage() \{
+ return commissionPercentage;
+ \}
+
+ public void setCommissionPercentage(double commissionPercentage) \{
+ this.commissionPercentage = commissionPercentage;
+ \}
+\}
+`}
+
+
+
+This class contains an identifier for a `Customer`. The following code will include the document referenced by that secondary level identifier:
+
+
+
+{`Order order = session
+ .include("Referral.CustomerId")
+ .load(Order.class, "orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session.load(Customer.class, order.getReferral().getCustomerId());
+`}
+
+
+
+It is possible to execute the same code with the fluent builder syntax:
+
+
+
+{`Order order = session
+ .load(Order.class, "orders/1-A",
+ i -> i.includeDocuments("Referral.CustomerId"));
+
+// this will not require querying the server!
+Customer customer = session.load(Customer.class, order.getReferral().getCustomerId());
+`}
+
+
+
+This secondary level include will also work with collections.
+The `Order.LineItems` field holds a collection of `LineItem` objects which each contain a reference to a `Product`:
+
+
+
+{`public class LineItem \{
+ private String productId;
+ private String name;
+ private int quantity;
+
+ public String getProductId() \{
+ return productId;
+ \}
+
+ public void setProductId(String productId) \{
+ this.productId = productId;
+ \}
+
+ public String getName() \{
+ return name;
+ \}
+
+ public void setName(String name) \{
+ this.name = name;
+ \}
+
+ public int getQuantity() \{
+ return quantity;
+ \}
+
+ public void setQuantity(int quantity) \{
+ this.quantity = quantity;
+ \}
+\}
+`}
+
+
+
+The `Product` documents can be included using the following syntax:
+
+
+
+{`Order order = session
+ .include("LineItems[].ProductId")
+ .load(Order.class, "orders/1-A");
+
+for (LineItem lineItem : order.getLineItems()) \{
+ // this will not require querying the server!
+ Product product = session.load(Product.class, lineItem.getProductId());
+\}
+`}
+
+
+
+The fluent builder syntax works here too.
+
+
+
+{`Order order = session.load(Order.class, "orders/1-A",
+ i -> i.includeDocuments("LineItems[].ProductId"));
+
+for (LineItem lineItem : order.getLineItems()) \{
+ // this will not require querying the server!
+ Product product = session.load(Product.class, lineItem.getProductId());
+\}
+`}
+
+
+
+The `[]` within the `include` tells RavenDB which field of secondary level objects to use as a reference.
+
+
+### String path conventions
+
+When using string-based includes like:
+
+
+
+{`Order order = session
+ .include("Referral.CustomerId")
+ .load(Order.class, "orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session.load(Customer.class, order.getReferral().getCustomerId());
+`}
+
+
+
+you must remember that the provided string path follows certain rules:
+
+1. **Dots** are used to separate fields,
+   e.g. `"Referral.CustomerId"` in the example above means that our `Order` contains a field `Referral`, and that field contains another field called `CustomerId`.
+
+2. The **indexer operator** `[]` is used to indicate that a field is a collection type.
+   So if our `Order` has a list of `LineItems` and each `LineItem` contains a `ProductId` field, then we can create a string path as follows: `"LineItems[].ProductId"`.
+
+3. **Prefixes** can be used to indicate the prefix of the identifier of the document that is going to be included.
+   This can be useful when working with custom or semantic identifiers.
+   For example, if you have a customer stored under `customers/login@domain.com` then you can include it
+   using `"Referral.CustomerEmail(customers/)"` (`customers/` is the prefix here), as sketched below.
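+
+As a concrete sketch of rule 3, the prefix form is passed to `include` like any other string path (the `CustomerEmail` field is hypothetical and not part of the classes above):
+
+
+
+{`Order order = session
+    .include("Referral.CustomerEmail(customers/)")
+    .load(Order.class, "orders/1-A");
+`}
+
+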
+
+Learning the string path rules is also useful when you want to query the database using the HTTP API.
+
+
+
+{`curl -X GET "http://localhost:8080/databases/Northwind/docs?id=orders/1-A&include=Lines[].Product"
+`}
+
+
+
+
+
+### Dictionary includes
+
+Dictionary keys and values can also be used when doing includes. Consider the following scenario:
+
+
+
+{`public class Person \{
+ private String id;
+ private String name;
+    private Map<String, String> attributes;
+
+ public String getId() \{
+ return id;
+ \}
+
+ public void setId(String id) \{
+ this.id = id;
+ \}
+
+ public String getName() \{
+ return name;
+ \}
+
+ public void setName(String name) \{
+ this.name = name;
+ \}
+
+    public Map<String, String> getAttributes() \{
+ return attributes;
+ \}
+
+    public void setAttributes(Map<String, String> attributes) \{
+ this.attributes = attributes;
+ \}
+\}
+`}
+
+
+
+
+
+{`HashMap<String, String> attributes1 = new HashMap<>();
+attributes1.put("Mother", "people/2");
+attributes1.put("Father", "people/3");
+
+Person person1 = new Person();
+person1.setId("people/1-A");
+person1.setName("John Doe");
+person1.setAttributes(attributes1);
+
+session.store(person1);
+
+Person person2 = new Person();
+person2.setId("people/2");
+person2.setName("Helen Doe");
+person2.setAttributes(Collections.emptyMap());
+
+session.store(person2);
+
+Person person3 = new Person();
+person3.setId("people/3");
+person3.setName("George Doe");
+person3.setAttributes(Collections.emptyMap());
+
+session.store(person3);
+`}
+
+
+
+Now we want to include all documents that are under dictionary values:
+
+
+
+{`Person person = session.include("Attributes.Values")
+ .load(Person.class, "people/1-A");
+
+Person mother = session
+ .load(Person.class, person.getAttributes().get("Mother"));
+
+Person father = session
+ .load(Person.class, person.getAttributes().get("Father"));
+
+Assert.assertEquals(1, session.advanced().getNumberOfRequests());
+`}
+
+
+
+The code above can also be rewritten with the fluent builder syntax:
+
+
+
+{`Person person = session.load(Person.class, "people/1-A",
+ i -> i.includeDocuments("Attributes.Values"));
+
+Person mother = session
+ .load(Person.class, person.getAttributes().get("Mother"));
+
+Person father = session
+ .load(Person.class, person.getAttributes().get("Father"));
+
+Assert.assertEquals(1, session.advanced().getNumberOfRequests());
+`}
+
+
+
+You can also include values from dictionary keys:
+
+
+
+{`Person person = session
+ .include("Attributes.Keys")
+ .load(Person.class, "people/1-A");
+`}
+
+
+
+Here, as well, this can be written with fluent builder syntax:
+
+
+
+{`Person person = session
+ .load(Person.class, "people/1-A",
+ i -> i.includeDocuments("Attributes.Keys"));
+`}
+
+
+
+#### Complex types
+
+If the values in the dictionary are more complex, e.g.:
+
+
+
+{`public class PersonWithAttribute \{
+ private String id;
+ private String name;
+    private Map<String, Attribute> attributes;
+
+ public String getId() \{
+ return id;
+ \}
+
+ public void setId(String id) \{
+ this.id = id;
+ \}
+
+ public String getName() \{
+ return name;
+ \}
+
+ public void setName(String name) \{
+ this.name = name;
+ \}
+
+    public Map<String, Attribute> getAttributes() \{
+ return attributes;
+ \}
+
+    public void setAttributes(Map<String, Attribute> attributes) \{
+ this.attributes = attributes;
+ \}
+\}
+
+public class Attribute \{
+ private String ref;
+
+ public Attribute() \{
+ \}
+
+ public Attribute(String ref) \{
+ this.ref = ref;
+ \}
+
+ public String getRef() \{
+ return ref;
+ \}
+
+ public void setRef(String ref) \{
+ this.ref = ref;
+ \}
+\}
+`}
+
+
+
+
+
+{`HashMap<String, Attribute> attributes = new HashMap<>();
+attributes.put("Mother", new Attribute("people/2"));
+attributes.put("Father", new Attribute("people/3"));
+
+PersonWithAttribute person1 = new PersonWithAttribute();
+person1.setId("people/1-A");
+person1.setName("John Doe");
+person1.setAttributes(attributes);
+
+session.store(person1);
+
+Person person2 = new Person();
+person2.setId("people/2");
+person2.setName("Helen Doe");
+person2.setAttributes(Collections.emptyMap());
+
+session.store(person2);
+
+Person person3 = new Person();
+person3.setId("people/3");
+person3.setName("George Doe");
+person3.setAttributes(Collections.emptyMap());
+
+session.store(person3);
+`}
+
+
+
+We can also do includes on specific fields:
+
+
+
+{`PersonWithAttribute person = session
+ .include("Attributes[].Ref")
+ .load(PersonWithAttribute.class, "people/1-A");
+
+Person mother = session
+ .load(Person.class, person.getAttributes().get("Mother").getRef());
+
+Person father = session
+ .load(Person.class, person.getAttributes().get("Father").getRef());
+
+Assert.assertEquals(1, session.advanced().getNumberOfRequests());
+`}
+
+
+
+## Combining approaches
+
+It is possible to combine the above techniques.
+Using the `DenormalizedCustomer` from above and creating an order that uses it:
+
+
+
+{`public class Order3 \{
+ private DenormalizedCustomer customer;
+ private String[] supplierIds;
+ private Referral referral;
+ private LineItem[] lineItems;
+ private double totalPrice;
+
+ public DenormalizedCustomer getCustomer() \{
+ return customer;
+ \}
+
+ public void setCustomer(DenormalizedCustomer customer) \{
+ this.customer = customer;
+ \}
+
+ public String[] getSupplierIds() \{
+ return supplierIds;
+ \}
+
+ public void setSupplierIds(String[] supplierIds) \{
+ this.supplierIds = supplierIds;
+ \}
+
+ public Referral getReferral() \{
+ return referral;
+ \}
+
+ public void setReferral(Referral referral) \{
+ this.referral = referral;
+ \}
+
+ public LineItem[] getLineItems() \{
+ return lineItems;
+ \}
+
+ public void setLineItems(LineItem[] lineItems) \{
+ this.lineItems = lineItems;
+ \}
+
+ public double getTotalPrice() \{
+ return totalPrice;
+ \}
+
+ public void setTotalPrice(double totalPrice) \{
+ this.totalPrice = totalPrice;
+ \}
+\}
+`}
+
+
+
+We have the advantages of denormalization: a quick and simple load of an `Order`, and the fairly static `Customer` details that are required for most processing.
+But we also have the ability to easily and efficiently load the full `Customer` object when necessary using:
+
+
+
+{`Order3 order = session
+ .include("Customer.Id")
+ .load(Order3.class, "orders/1-A");
+
+// this will not require querying the server!
+Customer customer = session.load(Customer.class, order.getCustomer().getId());
+`}
+
+
+
+This combining of denormalization and Includes could also be used with a list of denormalized objects.
+
+It is possible to use Include on a query that projects its results. Includes are evaluated after the projection has been evaluated.
+This opens up the possibility of implementing Tertiary Includes
+(i.e. retrieving documents that are referenced by documents that are referenced by the root document).
+
+RavenDB can support Tertiary Includes, but before resorting to them you should re-evaluate your document model.
+Needing Tertiary Includes can be an indication that you are designing your documents along "Relational" lines.
+
+## Summary
+
+There are no strict rules as to when to use which approach,
+but the general idea is to give it a lot of thought and consider the implications each approach has.
+
+As an example, in an e-commerce application it might be better to denormalize product names and prices into an order line object
+since you want to make sure the customer sees the same price and product title in the order history.
+But the customer name and addresses should probably be references rather than denormalized into the order entity.
+
+For most cases where denormalization is not an option, Includes are probably the answer.
+
+
diff --git a/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-nodejs.mdx b/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-nodejs.mdx
new file mode 100644
index 0000000000..1a857418ff
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/_handle-document-relationships-nodejs.mdx
@@ -0,0 +1,737 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+One of the design principles that RavenDB adheres to is the idea that documents are independent,
+meaning all data required to process a document is stored within the document itself.
+However, this doesn't mean there should not be relations between objects.
+
+There are valid scenarios where we need to define relationships between objects.
+By doing so, we expose ourselves to one major problem: whenever we load the containing entity,
+we are going to need to load data from the referenced entities as well (unless we are not interested in them).
+While the alternative of storing the whole entity in every object graph it is referenced in seems cheaper at first,
+this proves to be quite costly in terms of database resources and network traffic.
+
+RavenDB offers three elegant approaches to solve this problem. Each scenario will need to use one or more of them.
+When applied correctly, they can drastically improve performance, reduce network bandwidth, and speed up development.
+
+
+* In this page:
+ * [Denormalization](../../client-api/how-to/handle-document-relationships.mdx#denormalization)
+ * [Includes](../../client-api/how-to/handle-document-relationships.mdx#includes)
+ * [One to many includes](../../client-api/how-to/handle-document-relationships.mdx#one-to-many-includes)
+ * [Secondary level includes](../../client-api/how-to/handle-document-relationships.mdx#secondary-level-includes)
+ * [Dictionary includes](../../client-api/how-to/handle-document-relationships.mdx#dictionary-includes)
+ * [Dictionary includes: complex types](../../client-api/how-to/handle-document-relationships.mdx#dictionary-includes-complex-types)
+ * [Combining approaches](../../client-api/how-to/handle-document-relationships.mdx#combining-approaches)
+ * [Summary](../../client-api/how-to/handle-document-relationships.mdx#summary)
+
+## Denormalization
+
+The easiest solution is to denormalize the data within the containing entity,
+forcing it to contain the actual value of the referenced entity in addition to (or instead of) the foreign key.
+
+Take this JSON document for example:
+
+
+
+{`// Order document with ID: orders/1-A
+\{
+ "customer": \{
+ "id": "customers/1-A",
+ "name": "Itamar"
+ \},
+ "items": [
+ \{
+ "product": \{
+ "id": "products/1-A",
+ "name": "Milk",
+ "cost": 2.3
+ \},
+ "quantity": 3
+ \}
+ ]
+\}
+`}
+
+
+
+As you can see, the `Order` document now contains denormalized data from both the `Customer` and the `Product` documents, which are saved elsewhere in full.
+Note that we don't copy all the customer properties into the order;
+instead we clone only the ones that we care about when displaying or processing an order.
+This approach is called _denormalized reference_.
+
+The denormalization approach avoids many cross document lookups and results in only the necessary data being transmitted over the network,
+but it makes other scenarios more difficult. For example, consider the following entity structure as our starting point:
+
+
+
+{`class Order \{
+ constructor(
+ customerId = '',
+ supplierIds = [],
+ referral = null,
+ lineItems = [],
+ totalPrice = 0
+ ) \{
+ Object.assign(this, \{
+ customerId,
+ supplierIds,
+ referral,
+ lineItems,
+ totalPrice
+ \});
+ \}
+\}
+`}
+
+
+
+
+
+{`class Customer \{
+ constructor(
+ id = '',
+ name = ''
+ ) \{
+ Object.assign(this, \{
+ id,
+ name
+ \});
+ \}
+\}
+`}
+
+
+
+If we know that whenever we load an `Order` from the database we will need to know the customer's name and address,
+we could decide to create a denormalized `Order.customer` field and store those details directly in the `Order` object.
+Obviously, the password and other irrelevant details will not be denormalized:
+
+
+
+{`class DenormalizedCustomer \{
+ constructor(
+ id = '',
+ name = '',
+ address = ''
+ ) \{
+ Object.assign(this, \{
+ id,
+ name,
+ address
+ \});
+ \}
+\}
+`}
+
+
+
+There wouldn't be a direct reference between the `Order` and the `Customer`.
+Instead, `Order` holds a `DenormalizedCustomer`, which contains the interesting bits from `Customer` that we need whenever we process `Order` objects.
+
+But what happens when the user's address is changed? We will have to perform an aggregate operation to update all orders this customer has made.
+What if the customer has a lot of orders or changes their address frequently? Keeping these details in sync could become very demanding on the server.
+What if another process that works with orders needs a different set of customer properties?
+The `DenormalizedCustomer` will need to be expanded, possibly to the point that the majority of the customer record is cloned.
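+
+When a denormalized value does have to change, the copies can be updated without loading each order into memory. The following is a minimal sketch only, assuming the Node.js client's `session.advanced.patch(id, path, value)` API and a hypothetical order id:
+
+
+
+{`// Sketch (assumed API): update only the denormalized customer name on one order;
+// the patch is deferred and sent to the server on saveChanges()
+session.advanced.patch("orders/1-A", "customer.name", "New Name");
+await session.saveChanges();
+`}
+
+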
+
+
+**Denormalization** is a viable solution for rarely changing data or for data that must remain the same despite the underlying referenced data changing over time.
+
+
+
+
+## Includes
+
+The **Includes** feature addresses the limitations of denormalization.
+Instead of one object containing copies of the properties from another object,
+it is only necessary to hold a reference to the second object, which can be:
+
+* a Document (as described below)
+* a [Document Revision](../../document-extensions/revisions/client-api/session/including.mdx)
+* a [Counter](../../document-extensions/counters/counters-and-other-features.mdx#including-counters)
+* a [Time series](../../document-extensions/timeseries/client-api/session/include/overview.mdx)
+* a [Compare exchange value](../../client-api/operations/compare-exchange/include-compare-exchange.mdx)
+
+The server can then be instructed to pre-load the referenced object at the same time that the root object is retrieved, using:
+
+
+
+{`const order = await session
+ // Call 'include'
+ // Pass the path of the document property that holds document to include
+ .include("customerId")
+ .load("orders/1-A");
+
+const customer = await session
+ // This call to 'load' will not require querying the server
+ // No server request will be made
+ .load(order.customerId);
+`}
+
+
+
+Above we are asking RavenDB to retrieve the `Order` `orders/1-A`, and at the same time "include" the `Customer` referenced by the `customerId` property.
+The second call to `load()` is resolved completely client side (i.e. without a second request to the RavenDB server)
+because the relevant `Customer` object has already been retrieved (this is the full `Customer` object not a denormalized version).
+
+There is also a possibility to load multiple documents:
+
+
+
+{`const orders = await session
+ .include("customerId")
+ .load(["orders/1-A", "orders/2-A"]);
+
+const orderEntities = Object.entries(orders);
+
+for (let i = 0; i < orderEntities.length; i++) \{
+ // This will not require querying the server
+ const customer = await session.load(orderEntities[i][1].customerId);
+\}
+`}
+
+
+
+You can also use Includes with queries:
+
+
+
+
+{`const orders = await session
+ .query({ collection: "orders" })
+ .whereGreaterThan("totalPrice", 100)
+ .include("customerId")
+ .all();
+
+for (let i = 0; i < orders.length; i++) {
+ // This will not require querying the server
+ const customer = await session.load(orders[i].customerId);
+}
+`}
+
+
+
+
+{`const orders = await session
+ .query({ collection: "orders" })
+ .whereGreaterThan("totalPrice", 100)
+ .include(i => i
+ .includeDocuments("customerId") // include document
+        .includeCounter("OrderUpdateCount")) // builder can include counters as well
+ .all();
+
+for (let i = 0; i < orders.length; i++) {
+ // This will not require querying the server
+ const customer = await session.load(orders[i].customerId);
+}
+`}
+
+
+
+
+{`from "orders"
+where totalPrice > 100
+include customerId
+`}
+
+
+
+
+{`from "orders" as o
+where totalPrice > 100
+include customerId, counters(o,'OrderUpdateCount')
+`}
+
+
+
+
+This works because RavenDB has two channels through which it can return information in response to a load request.
+The first is the Results channel, through which the root object retrieved by the `load()` method call is returned.
+The second is the Includes channel, through which any included documents are sent back to the client.
+Client side, those included documents are not returned from the `load()` method call, but they are added to the session unit of work,
+and subsequent requests to load them are served directly from the session cache, without requiring any additional queries to the server.
+
+
+Embedded and builder variants of the `include` clause are essentially syntactic sugar and are equivalent on the server side.
+
+
+
+Streaming query results does not support the includes feature.
+Learn more in [How to Stream Query Results](../../client-api/session/querying/how-to-stream-query-results.mdx#stream-related-documents).
+
+### One to many includes
+
+Include can be used with a one to many relationship.
+In the above classes, an `Order` has a property `supplierIds` which contains an array of references to `Supplier` documents.
+The following code will cause the suppliers to be pre-loaded:
+
+
+
+{`const order = await session
+ .include("supplierIds")
+ .load("orders/1-A");
+
+for (let i = 0; i < order.supplierIds.length; i++) \{
+ // This will not require querying the server
+ const supplier = await session.load(order.supplierIds[i]);
+\}
+`}
+
+
+
+Alternatively, it is possible to use the fluent builder syntax.
+
+
+
+{`const order = await session
+ .load("orders/1-A", \{
+ includes: i => i.includeDocuments("supplierIds")
+ \});
+
+for (let i = 0; i < order.supplierIds.length; i++) \{
+ // This will not require querying the server
+ const supplier = await session.load(order.supplierIds[i]);
+\}
+`}
+
+
+
+The calls to `load()` within the `for` loop will not require a call to the server as the `Supplier` objects will already be loaded into the session cache.
+
+Multi-loads are also possible:
+
+
+
+{`const orders = await session
+ .include("supplierIds")
+ .load(["orders/1-A", "orders/2-A"]);
+
+const orderEntities = Object.entries(orders);
+
+for (let i = 0; i < orderEntities.length; i++) \{
+ const suppliers = orderEntities[i][1].supplierIds;
+
+ for (let j = 0; j < suppliers.length; j++) \{
+ // This will not require querying the server
+ const supplier = await session.load(suppliers[j]);
+ \}
+\}
+`}
+
+
+### Secondary level includes
+
+An Include does not need to work only on the value of a top level property within a document.
+It can be used to load a value from a secondary level.
+In the classes above, the `Order` contains a `referral` property which is of the type:
+
+
+
+{`class Referral \{
+ constructor(
+ customerId = '',
+ commissionPercentage = 0
+ ) \{
+ Object.assign(this, \{
+ customerId,
+ commissionPercentage
+ \});
+ \}
+\}
+`}
+
+
+
+This class contains an identifier for a `Customer`.
+The following code will include the document referenced by that secondary level identifier:
+
+
+
+{`const order = await session
+ .include("referral.customerId")
+ .load("orders/1-A");
+
+// This will not require querying the server
+const customer = await session.load(order.referral.customerId);
+`}
+
+
+
+It is possible to execute the same code with the fluent builder syntax:
+
+
+
+{`const order = await session
+ .load("orders/1-A", \{
+ includes: i => i.includeDocuments("referral.customerId")
+ \});
+
+// This will not require querying the server
+const customer = await session.load(order.referral.customerId);
+`}
+
+
+
+This secondary level include will also work with collections.
+The `lineItems` property holds a collection of `LineItem` objects which each contain a reference to a `Product`:
+
+
+
+{`class LineItem \{
+ constructor(
+ productId = '',
+ name = '',
+ quantity = 0
+ ) \{
+ Object.assign(this, \{
+ productId,
+ name,
+ quantity
+ \});
+ \}
+\}
+`}
+
+
+
+The `Product` documents can be included using the following syntax:
+
+
+
+{`const order = await session
+ .include("lineItems[].productId")
+ .load("orders/1-A");
+
+for (let i = 0; i < order.lineItems.length; i++) \{
+ // This will not require querying the server
+ const product = await session.load(order.lineItems[i].productId);
+\}
+`}
+
+
+
+The fluent builder syntax works here too.
+
+
+
+{`const order = await session
+ .load("orders/1-A", \{
+ includes: i => i.includeDocuments("lineItems[].productId")
+ \});
+
+for (let i = 0; i < order.lineItems.length; i++) \{
+ // This will not require querying the server
+ const product = await session.load(order.lineItems[i].productId);
+\}
+`}
+
+
+
+
+### String path conventions
+
+When using string-based includes like:
+
+
+
+{`const order = await session
+ .include("referral.customerId")
+ .load("orders/1-A");
+
+// This will not require querying the server
+const customer = await session.load(order.referral.customerId);
+`}
+
+
+
+you must remember that the provided string path follows certain rules:
+
+1. **Dots** are used to separate properties,
+   e.g. `"referral.customerId"` in the example above means that our `Order` contains a property `referral`, and that property contains another property called `customerId`.
+
+2. The **indexer operator** `[]` is used to indicate that a property is a collection type.
+   So if our `Order` has a list of `lineItems` and each `LineItem` contains a `productId` property, then we can create a string path as follows: `"lineItems[].productId"`.
+
+3. **Prefixes** can be used to indicate the prefix of the identifier of the document that is going to be included.
+   This can be useful when working with custom or semantic identifiers.
+   For example, if you have a customer stored under `customers/login@domain.com` then you can include it using `"referral.customerEmail(customers/)"` (`customers/` is the prefix here).
+
+Learning the string path rules is also useful when you want to query the database using the HTTP API.
+
+
+
+{`curl -X GET "http://localhost:8080/databases/Northwind/docs?id=orders/1-A&include=Lines[].Product"
+`}
+
+
+
+
+### Dictionary includes
+
+Dictionary keys and values can also be used when doing includes. Consider the following scenario:
+
+
+
+{`class Person \{
+ constructor(
+ id = '',
+ name = '',
+ // attributes will be assigned a plain object containing key-value pairs
+ attributes = \{\}
+ ) \{
+ Object.assign(this, \{
+ id,
+ name,
+ attributes
+ \});
+ \}
+\}
+`}
+
+
+
+
+
+{`const person1 = new Person();
+person1.name = "John Doe";
+person1.id = "people/1";
+person1.attributes = \{
+ "mother": "people/2",
+ "father": "people/3"
+\}
+
+const person2 = new Person();
+person2.name = "Helen Doe";
+person2.id = "people/2";
+
+const person3 = new Person();
+person3.name = "George Doe";
+person3.id = "people/3";
+
+await session.store(person1);
+await session.store(person2);
+await session.store(person3);
+
+await session.saveChanges();
+`}
+
+
+
+Now we want to include all documents that are under dictionary values:
+
+
+
+{`const person = await session
+ .include("attributes.$Values")
+ .load("people/1");
+
+const mother = await session
+ .load(person.attributes["mother"]);
+
+const father = await session
+ .load(person.attributes["father"]);
+
+assert.equal(session.advanced.numberOfRequests, 1);
+`}
+
+
+
+The code above can also be rewritten with the fluent builder syntax:
+
+
+
+{`const person = await session
+ .load("people/1", \{
+ includes: i => i.includeDocuments("attributes.$Values")
+ \});
+
+const mother = await session
+ .load(person.attributes["mother"]);
+
+const father = await session
+ .load(person.attributes["father"]);
+
+assert.equal(session.advanced.numberOfRequests, 1);
+`}
+
+
+
+You can also include values from dictionary keys:
+
+
+
+{`const person = await session
+ .include("attributes.$Keys")
+ .load("people/1");
+`}
+
+
+
+Here, as well, this can be written with fluent builder syntax:
+
+
+
+{`const person = await session
+ .load("people/1", \{
+ includes: i => i.includeDocuments("attributes.$Keys")
+ \});
+`}
+
+
+### Dictionary includes: complex types
+
+If the values in the dictionary are more complex, e.g.:
+
+
+
+{`class PersonWithAttribute \{
+ constructor(
+ id = '',
+ name = '',
+ // attributes will be assigned a complex object
+ attributes = \{\}
+ ) \{
+ Object.assign(this, \{
+ id,
+ name,
+ attributes
+ \});
+ \}
+\}
+
+class Attribute \{
+ constructor(
+ ref = ''
+ ) \{
+ Object.assign(this, \{
+ ref
+ \});
+ \}
+\}
+`}
+
+
+
+
+
+{`const attr2 = new Attribute();
+attr2.ref = "people/2";
+const attr3 = new Attribute();
+attr3.ref = "people/3";
+
+const person1 = new PersonWithAttribute();
+person1.name = "John Doe";
+person1.id = "people/1";
+person1.attributes = \{
+ "mother": attr2,
+ "father": attr3
+\}
+
+const person2 = new Person();
+person2.name = "Helen Doe";
+person2.id = "people/2";
+
+const person3 = new Person();
+person3.name = "George Doe";
+person3.id = "people/3";
+
+await session.store(person1);
+await session.store(person2);
+await session.store(person3);
+
+await session.saveChanges();
+`}
+
+
+
+We can also do includes on specific properties:
+
+
+
+{`const person = await session
+ .include("attributes.$Values[].ref")
+ .load("people/1");
+
+const mother = await session
+ .load(person.attributes["mother"].ref);
+
+const father = await session
+ .load(person.attributes["father"].ref);
+
+assert.equal(session.advanced.numberOfRequests, 1);
+`}
+
+
+
+
+
+## Combining approaches
+
+It is possible to combine the above techniques.
+Using the `DenormalizedCustomer` from above and creating an order that uses it:
+
+
+
+{`class Order2 \{
+ constructor(
+ customer = \{\},
+        supplierIds = [],
+ referral = null,
+ lineItems = [],
+ totalPrice = 0
+ ) \{
+ Object.assign(this, \{
+ customer,
+ supplierIds,
+ referral,
+ lineItems,
+ totalPrice
+ \});
+ \}
+\}
+`}
+
+
+
+We have the advantages of denormalization: a quick and simple load of an `Order`,
+and the fairly static `Customer` details that are required for most processing.
+But we also have the ability to easily and efficiently load the full `Customer` object when necessary using:
+
+
+
+{`const order = await session
+ .include("customer.id")
+ .load("orders/1-A");
+
+// This will not require querying the server
+const customer = await session.load(order.customer.id);
+`}
+
+
+
+This combining of denormalization and Includes could also be used with a list of denormalized objects.
+
+It is possible to use Include on a query that projects its results.
+Includes are evaluated after the projection has been evaluated.
+This opens up the possibility of implementing Tertiary Includes (i.e. retrieving documents that are referenced by documents that are referenced by the root document).
+
+RavenDB can support Tertiary Includes, but before resorting to them you should re-evaluate your document model.
+Needing Tertiary Includes can be an indication that you are designing your documents along "Relational" lines.
+
+
+
+## Summary
+
+There are no strict rules as to when to use which approach, but the general idea is to give it a lot of thought and consider the implications each approach has.
+
+As an example, in an e-commerce application it might be better to denormalize product names and prices into an order line object
+since you want to make sure the customer sees the same price and product title in the order history.
+But the customer name and addresses should probably be references rather than denormalized into the order entity.
+
+For most cases where denormalization is not an option, Includes are probably the answer.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/how-to/_setup-aggressive-caching-csharp.mdx b/versioned_docs/version-7.1/client-api/how-to/_setup-aggressive-caching-csharp.mdx
new file mode 100644
index 0000000000..6baaec8205
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/_setup-aggressive-caching-csharp.mdx
@@ -0,0 +1,142 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+## Standard Cache Configuration
+
+The RavenDB client provides a caching mechanism out of the box. The default caching configuration is to cache all requests.
+
+The size of the cache can be configured by changing the [`MaxHttpCacheSize` convention](../../client-api/configuration/conventions.mdx#maxhttpcachesize).
+
+The client utilizes the server's `304 Not Modified` responses and serves the data from the cache when available.
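+
+For example, the cache size can be raised through the store conventions. This is a minimal sketch only, assuming the `Size` and `SizeUnit` types that ship with the client package:
+
+
+
+{`var store = new DocumentStore
+\{
+    Urls = new[] \{ "http://localhost:8080" \},
+    Database = "Northwind",
+    Conventions =
+    \{
+        // sketch: allow the HTTP cache to grow to 1 GB
+        MaxHttpCacheSize = new Size(1024, SizeUnit.Megabytes)
+    \}
+\};
+`}
+
+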
+
+## Aggressive Mode
+
+The aggressive caching feature goes even further. Enabling it means that the client doesn't need to ask the server at all: it simply returns the response directly from the local cache, without relying on the `304 Not Modified` status.
+Results are returned very fast.
+
+Here's how it works: the client subscribes to server notifications using the [Changes API](../changes/what-is-changes-api.mdx). By taking advantage of them, it is able to invalidate cached documents when they are changed.
+The client knows when it can serve the response from the cache, and when it has to send the request to get the up-to-date result.
+
+
+Although the aggressive cache uses these notifications to invalidate cached data, it is still possible to get stale results because of the time it takes for a notification to arrive from the server.
+
+
+Options for aggressive caching can be set in the Document Store conventions:
+
+
+
+{`var documentStore = new DocumentStore
+\{
+ Urls = new[] \{ "http://localhost:8080" \},
+ Database = "NorthWind",
+ Conventions =
+ \{
+ AggressiveCache =
+ \{
+ Duration = TimeSpan.FromMinutes(5),
+ Mode = AggressiveCacheMode.TrackChanges
+ \}
+ \}
+\};
+`}
+
+
+
+We can activate this mode globally from the store or per session.
+
+To activate this mode globally from the store we just need to add one of the following lines:
+
+
+
+{`documentStore.AggressivelyCacheFor(TimeSpan.FromMinutes(5));
+
+documentStore.AggressivelyCache(); // Defines the cache duration for 1 day
+`}
+
+
+
+If we want to activate this mode only within a session, we wrap the relevant calls:
+
+
+
+{`using (session.Advanced.DocumentStore.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
+\{
+    Order order = session.Load<Order>("orders/1");
+\}
+`}
+
+
+
+If there is a value in the cache for `orders/1` that is at most 5 minutes old and we haven't received any change notification about it, it can be returned directly. The same mechanism works on queries as well:
+
+
+
+{`using (session.Advanced.DocumentStore.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
+\{
+    List<Order> orders = session.Query<Order>().ToList();
+\}
+`}
+
+
+
+The notification system means that you can safely set the aggressive cache duration to a longer period. The document store exposes the method:
+
+
+
+{`using (session.Advanced.DocumentStore.AggressivelyCache())
+\{ \}
+`}
+
+
+
+which is equivalent to:
+
+
+
+{`using (session.Advanced.DocumentStore.AggressivelyCacheFor(TimeSpan.FromDays(1)))
+\{ \}
+`}
+
+
+
+### Disable Change Tracking
+
+The client subscribes to change notifications from the server using the [Changes API](../changes/what-is-changes-api.mdx). You can choose to ignore
+these notifications from the server by changing the `AggressiveCacheMode` in the Document Store conventions.
+
+The modes are:
+* `AggressiveCacheMode.TrackChanges` - The default value. When the server sends a notification that some items (documents or indexes) have changed,
+those items are invalidated from the cache. The next time these items are loaded they will be retrieved from the server.
+* `AggressiveCacheMode.DoNotTrackChanges` - Notifications from the server will be ignored. For the aggressive cache `Duration`, results will be
+retrieved from the cache and may therefore be stale.
+
+
+
+{`documentStore.AggressivelyCacheFor(TimeSpan.FromMinutes(5), AggressiveCacheMode.DoNotTrackChanges);
+
+//Disable change tracking for just one session:
+using (session.Advanced.DocumentStore.AggressivelyCacheFor(TimeSpan.FromMinutes(5),
+ AggressiveCacheMode.DoNotTrackChanges))
+\{ \}
+`}
+
+
+
+### Disable Aggressive Mode
+
+We can disable the aggressive mode globally by calling `documentStore.DisableAggressiveCaching();`.
+But what if we need to disable aggressive caching only for a specific call, or force the cache to be refreshed?
+Just like before, we can use `DisableAggressiveCaching()` per session:
+
+
+
+{`using (session.Advanced.DocumentStore.DisableAggressiveCaching())
+\{
+    Order order = session.Load<Order>("orders/1");
+\}
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/how-to/_setup-aggressive-caching-java.mdx b/versioned_docs/version-7.1/client-api/how-to/_setup-aggressive-caching-java.mdx
new file mode 100644
index 0000000000..896b6238c3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/_setup-aggressive-caching-java.mdx
@@ -0,0 +1,141 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+## Standard Cache Configuration
+
+The RavenDB client provides a caching mechanism out of the box. The default caching configuration is to cache all requests.
+
+The size of the cache can be configured by changing the [`MaxHttpCacheSize` convention](../../client-api/configuration/conventions.mdx#maxhttpcachesize).
+
+The client utilizes the server's `304 Not Modified` responses and serves the data from the cache when available.
+
+## Aggressive Mode
+
+The aggressive caching feature goes even further. Enabling it means that the client doesn't need to ask the server at all: it simply returns the response directly from the local cache, without relying on the `304 Not Modified` status.
+Results are returned very fast.
+
+Here's how it works: the client subscribes to server notifications using the [Changes API](../changes/what-is-changes-api.mdx). By taking advantage of them, it is able to invalidate cached documents when they are changed.
+The client knows when it can serve the response from the cache, and when it has to send the request to get the up-to-date result.
+
+
+Although the aggressive cache uses these notifications to invalidate cached data, it is still possible to get stale results because of the time it takes for a notification to arrive from the server.
+
+
+Options for aggressive caching can be set in the Document Store conventions:
+
+
+
+{`try (IDocumentStore documentStore = new DocumentStore()) \{
+ DocumentConventions conventions = documentStore.getConventions();
+
+ conventions.aggressiveCache().setDuration(Duration.ofMinutes(5));
+ conventions.aggressiveCache().setMode(AggressiveCacheMode.TRACK_CHANGES);
+ // Do your work here
+\}
+`}
+
+
+
+We can activate this mode globally from the store or per session.
+
+To activate this mode globally from the store we just need to add one of the following lines:
+
+
+
+{`documentStore.aggressivelyCacheFor(Duration.ofMinutes(5));
+
+documentStore.aggressivelyCache(); // Defines the cache duration for 1 day
+`}
+
+
+
+If we want to activate this mode only within a session, we wrap the relevant calls:
+
+
+
+{`try (CleanCloseable cacheScope = session.advanced().getDocumentStore()
+ .aggressivelyCacheFor(Duration.ofMinutes(5))) \{
+    Order order = session.load(Order.class, "orders/1");
+\}
+`}
+
+
+
+If there is a value in the cache for `orders/1` that is at most 5 minutes old and we haven't received any change notification about it, it can be returned directly. The same mechanism works on queries as well:
+
+
+
+{`try (CleanCloseable cacheScope = session.advanced().getDocumentStore()
+ .aggressivelyCacheFor(Duration.ofMinutes(5))) \{
+    List<Order> orders = session.query(Order.class)
+ .toList();
+\}
+`}
+
+
+
+The notification system means that you can safely set the aggressive cache duration to a longer period. The document store exposes the method:
+
+
+
+{`try (CleanCloseable cacheScope = session
+ .advanced().getDocumentStore().aggressivelyCache()) \{
+
+\}
+`}
+
+
+
+which is equivalent to:
+
+
+
+{`try (CleanCloseable cacheScope = session
+ .advanced().getDocumentStore().aggressivelyCacheFor(Duration.ofDays(1))) \{
+
+\}
+`}
+
+
+
+### Disable Change Tracking
+
+The client subscribes to change notifications from the server using the [Changes API](../changes/what-is-changes-api.mdx). You can choose to ignore
+these notifications by changing the `AggressiveCacheMode` in the Document Store conventions.
+
+The modes are:
+* `AggressiveCacheMode.TRACK_CHANGES` - The default value. When the server sends a notification that some items (documents or indexes) have changed,
+those items are invalidated from the cache. The next time these items are loaded they will be retrieved from the server.
+* `AggressiveCacheMode.DO_NOT_TRACK_CHANGES` - Notifications from the server will be ignored. For the aggressive cache `Duration`, results will be
+retrieved from the cache and may therefore be stale.
+
+
+
+{`documentStore.aggressivelyCacheFor(Duration.ofMinutes(5), AggressiveCacheMode.DO_NOT_TRACK_CHANGES);
+
+// Disable change tracking for just one session:
+try (session.advanced().getDocumentStore().aggressivelyCacheFor(Duration.ofMinutes(5),
+ AggressiveCacheMode.DO_NOT_TRACK_CHANGES)) \{
+\}
+`}
+
+
+
+### Disable Aggressive Mode
+
+We can disable the aggressive mode globally by calling `documentStore.disableAggressiveCaching();`.
+But what if we need to disable aggressive caching only for a specific call, or force the cache to be refreshed?
+Just like before, we can use `disableAggressiveCaching()` per session:
+
+
+
+{`try (CleanCloseable cacheScope = session.advanced().getDocumentStore()
+ .disableAggressiveCaching()) \{
+ Order order = session.load(Order.class, "orders/1");
+\}
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections.png
new file mode 100644
index 0000000000..175d134cb7
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections_dialog_1.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections_dialog_1.png
new file mode 100644
index 0000000000..a7889fe3a4
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections_dialog_1.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections_dialog_2.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections_dialog_2.png
new file mode 100644
index 0000000000..ec8b30b4b9
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_connections_dialog_2.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text.png
new file mode 100644
index 0000000000..b6b112ef83
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_dialog.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_dialog.png
new file mode 100644
index 0000000000..d51cc54ee2
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_dialog.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_results.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_results.png
new file mode 100644
index 0000000000..6dca152c92
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_results.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_select.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_select.png
new file mode 100644
index 0000000000..b42944adb6
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_select.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_1.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_1.png
new file mode 100644
index 0000000000..f1c0af7bb2
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_1.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_2.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_2.png
new file mode 100644
index 0000000000..e5ec649f1f
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_2.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_3.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_3.png
new file mode 100644
index 0000000000..cb381b8a3b
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_from_text_wizard_3.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_integrated_long_url.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_integrated_long_url.png
new file mode 100644
index 0000000000..b6f4348551
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_integrated_long_url.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/assets/excel_url_too_long.png b/versioned_docs/version-7.1/client-api/how-to/assets/excel_url_too_long.png
new file mode 100644
index 0000000000..520da3d5ab
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/how-to/assets/excel_url_too_long.png differ
diff --git a/versioned_docs/version-7.1/client-api/how-to/handle-document-relationships.mdx b/versioned_docs/version-7.1/client-api/how-to/handle-document-relationships.mdx
new file mode 100644
index 0000000000..c1a7305b9a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/handle-document-relationships.mdx
@@ -0,0 +1,49 @@
+---
+title: "How to Handle Document Relationships"
+hide_table_of_contents: true
+sidebar_label: ...handle document relationships
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import HandleDocumentRelationshipsCsharp from './_handle-document-relationships-csharp.mdx';
+import HandleDocumentRelationshipsJava from './_handle-document-relationships-java.mdx';
+import HandleDocumentRelationshipsNodejs from './_handle-document-relationships-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/how-to/integrate-with-excel.mdx b/versioned_docs/version-7.1/client-api/how-to/integrate-with-excel.mdx
new file mode 100644
index 0000000000..aa6a6c2362
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/integrate-with-excel.mdx
@@ -0,0 +1,210 @@
+---
+title: "Client API: How to Integrate with Excel"
+hide_table_of_contents: true
+sidebar_label: ...integrate with Excel
+sidebar_position: 3
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Client API: How to Integrate with Excel
+
+A very common use case for many applications is to expose data to users as an Excel file. RavenDB has dedicated support that allows you to directly consume data stored in a database by an Excel application.
+
+The integration of Excel with the data store is achieved by a designated query streaming endpoint that outputs a stream in a format Excel accepts: Comma-Separated Values (CSV).
+
+In order to take advantage of this feature, you need to specify a valid query according to [RQL syntax](../../client-api/session/querying/what-is-rql.mdx).
+
+The generic HTTP request will have the following address:
+
+
+
+{`http://localhost:8080/databases/[db_name]/streams/queries?query=[query]&format=csv
+`}
+
+
+
+In order to include only specific properties in the CSV output you can use the `field` parameter:
+
+
+
+{`http://localhost:8080/databases/[db_name]/streams/queries?query=[query]&field=[field-1]&field=[field-2]...&field=[field-N]&format=csv
+`}
+
+
+
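+If you want to consume this endpoint programmatically (for instance, to verify the CSV output before wiring it into Excel), a minimal sketch using `HttpClient` could look like the following; the database name and query are illustrative:
+
+
+{`using var client = new HttpClient();
+
+// Build the streaming-endpoint URL; the RQL query must be URL-escaped
+var url = "http://localhost:8080/databases/Northwind/streams/queries" +
+          "?query=" + Uri.EscapeDataString("from Products select Name") +
+          "&format=csv";
+
+// The response body is the CSV stream
+string csv = await client.GetStringAsync(url);
+Console.WriteLine(csv);
+`}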
+
+
+In some cases it might be cumbersome to use the URL to send the query or the query might be too long. Please see our [dedicated section](../../client-api/how-to/integrate-with-excel.mdx#dealing-with-long-query-urls-in-excel) that deals with that problem.
+
+
+
+## Example
+
+First let's create a database, Northwind, and import the [sample data](../../studio/database/tasks/create-sample-data.mdx) into it.
+
+Now let's query the Products collection, load each product's related Category document, and project some of their properties using the RQL below:
+
+
+
+{`from Products as p
+load p.Category as c
+select
+\{
+ Name: p.Name,
+ Category: c.Name,
+\}
+`}
+
+
+
+In order to execute the above query we will need to use the following URL:
+
+
+
+{`http://localhost:8080/databases/Northwind/streams/queries?query=from%20Products%20as%20p%0Aload%20p.Category%20as%20c%0Aselect%20%0A%7B%0A%20%20%20%20Name%3A%20p.Name%2C%0A%20%20%20%20Category%3A%20c.Name%2C%0A%7D&format=csv
+`}
+
+
+
+Going to the above address in a web browser will download an export.csv file containing the following results:
+
+
+
+{`Name,Category
+Chang,Beverages
+Aniseed Syrup,Condiments
+Chef Anton's Cajun Seasoning,Condiments
+Chef Anton's Gumbo Mix,Condiments
+Grandma's Boysenberry Spread,Condiments
+Uncle Bob's Organic Dried Pears,Produce
+Northwoods Cranberry Sauce,Condiments
+Mishi Kobe Niku,Meat/Poultry
+Ikura,Seafood
+Queso Cabrales,Dairy Products
+Queso Manchego La Pastora,Dairy Products
+Konbu,Seafood
+Tofu,Produce
+Genen Shouyu,Condiments
+Pavlova,Confections
+Alice Mutton,Meat/Poultry
+Carnarvon Tigers,Seafood
+`}
+
+
+
+To pull this data into Excel, we need to create a new spreadsheet and import data using `From Text`:
+
+
+
+In the Open File dialog, paste the query URL:
+
+
+
+Next, the Import Wizard will show up, where we can adjust the import settings (don't forget to check `Comma` as the delimiter):
+
+
+
+
+
+
+
+Finally we need to select where we would like to place the imported data:
+
+
+
+As a result of the previous actions, the spreadsheet data should look like this:
+
+
+
+Now we must tell Excel to refresh the data. Click on `Connections` in the `Data` panel:
+
+
+
+You will see something like:
+
+
+
+Go to Properties and:
+
+1. **uncheck** `Prompt for file name on refresh`.
+2. **check** `Refresh data when opening the file`.
+
+
+
+You can close the file, change something in the database, and reopen it. You will see new values.
+
+## Dealing with Long Query URLs in Excel
+
+If you try to run a somewhat more complex query, you might find that Excel refuses to execute your request.
+
+### Long Query Example
+
+
+
+{`from Products as p
+load p.Category as c
+select
+\{
+ Name: p.Name,
+ Category: c.Name,
+ Discontinued: p.Discontinued,
+ PricePerUnit: p.PricePerUnit
+\}
+`}
+
+
+
+After escaping the above query, we end up with the following request URL:
+
+
+
+{`http://localhost:8080/databases/Northwind/streams/queries?query=from%20Products%20as%20p%0Aload%20p.Category%20as%20c%0Aselect%20%0A%7B%0A%20%20%20%20Name%3A%20p.Name%2C%0A%20%20%20%20Category%3A%20c.Name%2C%0A%20%20%20%20Discontinued%3A%20p.Discontinued%2C%0A%20%20%20%20PricePerUnit%3A%20p.PricePerUnit%0A%7D&format=csv
+`}
+
+
+
+Trying to use this URL will produce the following error in Excel:
+
+
+
+There are two ways to deal with this problem. The first is to use an online URL-shortening service like [TinyUrl](https://tinyurl.com/) and provide it with the above URL.
+
+What you get back is a short URL such as `https://tinyurl.com/y8t7j6r7`. This is a convenient workaround as long as you are not on an isolated system and have no security restrictions.
+The other option is to redirect the query through a pre-defined query that resides in your database.
+For that, you will need to include a document in your database with a `Query` property. Let's create such a document and call it `Excel/ProductWithCatagory`.
+The name of the document has no significance, but it is recommended to use a key that reflects the purpose of the document.
+Let's add the `Query` property and set its value to the above query:
+
+
+
+{`\{
+ "Query": "from%20Products%20as%20p%0Aload%20p.Category%20as%20c%0Aselect%20%0A%7B%0A%20%20%20%20Name%3A%20p.Name%2C%0A%20%20%20%20Category%3A%20c.Name%2C%0A%20%20%20%20Discontinued%3A%20p.Discontinued%2C%0A%20%20%20%20PricePerUnit%3A%20p.PricePerUnit%0A%7D",
+ "@metadata": \{
+ "@collection": "Excel"
+ \}
+\}
+`}
+
+
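+The document can be created in the Studio, or stored from code. Here is a minimal sketch (the `ExcelQuery` class name is illustrative, and the escaped query value is elided):
+
+
+{`public class ExcelQuery
+\{
+    public string Query \{ get; set; \}
+\}
+
+using (var session = store.OpenSession())
+\{
+    session.Store(new ExcelQuery
+    \{
+        // The URL-escaped query shown above (elided here)
+        Query = "from%20Products%20as%20p%0A..."
+    \}, "Excel/ProductWithCatagory");
+    session.SaveChanges();
+\}
+`}
+
+Note that when storing from code, the collection name is derived from the class name by convention, so the document may land in a collection other than `Excel`; the collection has no effect on the `fromDocument` redirection itself, which looks the document up by its ID.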
+
+Now that we have the document ready for use, all we need to do is modify our URL so it will use the document redirection feature.
+
+
+
+{`http://localhost:8080/databases/Northwind/streams/queries?fromDocument=Excel%2FProductWithCatagory&format=csv
+`}
+
+
+
+Repeating the instructions above, you should get the following result:
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/how-to/setup-aggressive-caching.mdx b/versioned_docs/version-7.1/client-api/how-to/setup-aggressive-caching.mdx
new file mode 100644
index 0000000000..054df4a26a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/setup-aggressive-caching.mdx
@@ -0,0 +1,29 @@
+---
+title: "Client API: How to Setup Aggressive Caching"
+hide_table_of_contents: true
+sidebar_label: ...setup aggressive caching
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SetupAggressiveCachingCsharp from './_setup-aggressive-caching-csharp.mdx';
+import SetupAggressiveCachingJava from './_setup-aggressive-caching-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/how-to/store-dates.mdx b/versioned_docs/version-7.1/client-api/how-to/store-dates.mdx
new file mode 100644
index 0000000000..279b095087
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/store-dates.mdx
@@ -0,0 +1,33 @@
+---
+title: "Client API: How to Store Dates in RavenDB Using UTC and Using Local Time"
+hide_table_of_contents: true
+sidebar_label: ...store dates
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Client API: How to Store Dates in RavenDB Using UTC and Using Local Time
+
+When you store a date in RavenDB, it saves whether the date is UTC or not. When the date is not UTC, a local date is treated as "Unspecified".
+
+However, if people from around the world use the same database and you store unspecified local times, the time zone offset is not stored. To handle this scenario, store the date as a `DateTimeOffset`, which saves the date and time along with the time zone offset.
+
+The decision of whether to use UTC, Local Time, or `DateTimeOffset` is an application decision, not an infrastructure decision. There are valid reasons for using any one of these.
+
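+As a minimal sketch (the `Meeting` class and its property names are illustrative, not part of the RavenDB API), storing a `DateTimeOffset` preserves the offset:
+
+
+{`public class Meeting
+\{
+    public string Id \{ get; set; \}
+    // DateTimeOffset keeps the local time together with its UTC offset
+    public DateTimeOffset ScheduledAt \{ get; set; \}
+\}
+
+using (var session = store.OpenSession())
+\{
+    session.Store(new Meeting
+    \{
+        // 09:30 local time at UTC+02:00
+        ScheduledAt = new DateTimeOffset(2024, 3, 1, 9, 30, 0, TimeSpan.FromHours(2))
+    \});
+    session.SaveChanges();
+\}
+`}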
+
+## ISO 8601 Compliance and Default Storing Formats
+
+RavenDB is [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html) compliant.
+
+The default storing format for `DateTime` is: **"yyyy'-'MM'-'dd'T'HH':'mm':'ss.fffffff"**
+
+For storing `DateTimeOffset`, RavenDB uses the [Round-trip ("o")](https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings#Roundtrip) format.
+
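+For illustration, here is a minimal sketch of the round-trip ("o") format that .NET produces for a `DateTimeOffset` value (the date itself is arbitrary):
+
+
+{`var dto = new DateTimeOffset(2024, 3, 1, 9, 30, 0, TimeSpan.FromHours(2));
+
+// The "o" (round-trip) format preserves the time zone offset:
+Console.WriteLine(dto.ToString("o")); // 2024-03-01T09:30:00.0000000+02:00
+`}
+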
+## More Information
+For detailed information about this topic, please refer to the [Working with Date and Time in RavenDB](https://codeofmatt.com/date-and-time-in-ravendb/) article written by Matt Johnson.
diff --git a/versioned_docs/version-7.1/client-api/how-to/subscribe-to-store-events.mdx b/versioned_docs/version-7.1/client-api/how-to/subscribe-to-store-events.mdx
new file mode 100644
index 0000000000..1f4af75905
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/subscribe-to-store-events.mdx
@@ -0,0 +1,428 @@
+---
+title: "Client API: Subscribing to Store Events"
+hide_table_of_contents: true
+sidebar_label: ...subscribe to Store events
+sidebar_position: 3
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Client API: Subscribing to Store Events
+
+
+* **Events** allow users to perform custom actions in response to operations made in
+ a `Document Store` or a `Session`.
+
+* An event is invoked when the selected action is executed on an entity,
+  or when a query is performed.
+
+* Subscribing to an event at the `DocumentStore` level subscribes to this
+ event in all subsequent sessions.
+
+ E.g., to invoke an event after SaveChanges() is called by **any subsequent session**, use -
+ `store.OnAfterSaveChanges += OnAfterSaveChangesEvent;`
+
+* Subscribing to an event in a `Session` is valid only for this session.
+
+ E.g., to invoke an event after SaveChanges() is called by **this session** only, use -
+ `session.Advanced.OnAfterSaveChanges += OnAfterSaveChangesEvent;`
+
+ Read more about `Session` events [here](../../client-api/session/how-to/subscribe-to-events.mdx).
+
+* In this page:
+ * [Store Events](../../client-api/how-to/subscribe-to-store-events.mdx#store-events)
+ * [OnBeforeRequest](../../client-api/how-to/subscribe-to-store-events.mdx#section)
+ * [OnSucceedRequest](../../client-api/how-to/subscribe-to-store-events.mdx#section-1)
+ * [AfterDispose](../../client-api/how-to/subscribe-to-store-events.mdx#section-2)
+ * [BeforeDispose](../../client-api/how-to/subscribe-to-store-events.mdx#section-3)
+ * [RequestExecutorCreated](../../client-api/how-to/subscribe-to-store-events.mdx#section-4)
+ * [OnSessionCreated](../../client-api/how-to/subscribe-to-store-events.mdx#section-5)
+ * [OnFailedRequest](../../client-api/how-to/subscribe-to-store-events.mdx#section-6)
+ * [OnTopologyUpdated](../../client-api/how-to/subscribe-to-store-events.mdx#section-7)
+ * [Store/Session Events](../../client-api/how-to/subscribe-to-store-events.mdx#store/session-events)
+
+
+## Store Events
+
+You can subscribe to the following events only at the store level, not within a session.
+
+## `OnBeforeRequest`
+
+This event is invoked when a request is made to the server, just before
+the request is actually sent.
+It should be defined with this signature:
+
+
+{`private void OnBeforeRequestEvent(object sender, BeforeRequestEventArgs args);
+`}
+
+
+
+**Parameters**:
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **sender** | `IDocumentStore ` | The subscribed store that triggered the event |
+| **args** | `BeforeRequestEventArgs` | See details below |
+
+`BeforeRequestEventArgs`:
+
+
+{`public class BeforeRequestEventArgs : EventArgs
+\{
+ // Database Name
+ public string Database \{ get; \}
+ // Database URL
+ public string Url \{ get; \}
+ // The request intended to be sent to the server
+ public HttpRequestMessage Request \{ get; \}
+ // The number of attempts made to send the request to the server
+ public int AttemptNumber \{ get; \}
+\}
+`}
+
+
+
+* **Example**:
+ To define a method that checks URLs sent in a document store request:
+
+
+{`private void OnBeforeRequestEvent(object sender, BeforeRequestEventArgs args)
+\{
+ var forbiddenURL = new Regex("/databases/[^/]+/docs");
+
+ if (forbiddenURL.IsMatch(args.Url) == true)
+ \{
+ // action to be taken if the URL is forbidden
+ \}
+\}
+`}
+
+
+
+ To subscribe to the event:
+
+
+{`// Subscribe to the event
+store.OnBeforeRequest += OnBeforeRequestEvent;
+`}
+
+
+
+## `OnSucceedRequest`
+
+This event is invoked upon receiving a successful reply from the server.
+It should be defined with this signature:
+
+
+{`private void OnSucceedRequestEvent(object sender, SucceedRequestEventArgs args);
+`}
+
+
+
+**Parameters**:
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **sender** | `IDocumentStore ` | The subscribed store that triggered the event |
+| **args** | `SucceedRequestEventArgs` | See details below |
+
+`SucceedRequestEventArgs`:
+
+
+{`public class SucceedRequestEventArgs : EventArgs
+\{
+ // Database Name
+ public string Database \{ get; \}
+ // Database URL
+ public string Url \{ get; \}
+ // The message returned from the server
+ public HttpResponseMessage Response \{ get; \}
+ // The original request sent to the server
+ public HttpRequestMessage Request \{ get; \}
+ // The number of attempts made to send the request to the server
+ public int AttemptNumber \{ get; \}
+\}
+`}
+
+
+
+* **Example**
+ To define a method that would be activated when a request succeeds:
+
+
+{`private void OnSucceedRequestEvent(object sender, SucceedRequestEventArgs args)
+\{
+ if (args.Response.IsSuccessStatusCode == true)
+ \{
+ // action to be taken after a successful request
+ \}
+\}
+`}
+
+
+
+ To subscribe to the event:
+
+
+{`// Subscribe to the event
+store.OnSucceedRequest += OnSucceedRequestEvent;
+`}
+
+
+
+## `AfterDispose`
+This event is invoked immediately after a document store is disposed of.
+It should be defined with this signature:
+
+
+{`private void AfterDisposeEvent(object sender, EventArgs args);
+`}
+
+
+
+**Parameters**:
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **sender** | `IDocumentStore ` | The subscribed store whose disposal triggered the event |
+| **args** | `EventArgs` | **args** has no contents for this event |
+
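+* **Example**
+  A minimal sketch of a handler for this event (the log message is illustrative); it assumes the standard .NET `EventHandler` pattern:
+
+
+{`private void AfterDisposeEvent(object sender, EventArgs args)
+\{
+    // E.g., log that the store was disposed (illustrative)
+    Console.WriteLine("The document store was disposed.");
+\}
+
+// Subscribe to the event
+store.AfterDispose += AfterDisposeEvent;
+`}
+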
+## `BeforeDispose`
+This event is invoked immediately before a document store is disposed of.
+It should be defined with this signature:
+
+
+{`private void BeforeDisposeEvent(object sender, EventArgs args);
+`}
+
+
+
+**Parameters**:
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **sender** | `IDocumentStore ` | The subscribed store whose disposal triggered the event |
+| **args** | `EventArgs` | **args** has no contents for this event |
+
+## `RequestExecutorCreated`
+This event is invoked when a Request Executor is created,
+allowing you to subscribe to various events of the request executor.
+It should be defined with this signature:
+
+
+{`private void RequestExecutorCreatedEvent(object sender, RequestExecutor args);
+`}
+
+
+
+**Parameters**:
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **sender** | `IDocumentStore ` | The subscribed store that triggered the event |
+| **args** | `RequestExecutor` | The created Request Executor instance |
+
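+* **Example**
+  A minimal sketch of a handler for this event (the diagnostic message is illustrative):
+
+
+{`private void RequestExecutorCreatedEvent(object sender, RequestExecutor args)
+\{
+    // E.g., log the creation of the request executor for diagnostics (illustrative)
+    Console.WriteLine("A RequestExecutor instance was created.");
+\}
+
+// Subscribe to the event
+store.RequestExecutorCreated += RequestExecutorCreatedEvent;
+`}
+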
+## `OnSessionCreated`
+This event is invoked after a session is created, allowing you, for example,
+to change session configurations.
+It should be defined with this signature:
+
+
+{`private void OnSessionCreatedEvent(object sender, SessionCreatedEventArgs args);
+`}
+
+
+
+**Parameters**:
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **sender** | `IDocumentStore ` | The subscribed store that triggered the event |
+| **args** | `SessionCreatedEventArgs` | The created Session |
+
+`SessionCreatedEventArgs`:
+
+
+{`public class SessionCreatedEventArgs : EventArgs
+\{
+ public InMemoryDocumentSessionOperations Session \{ get; \}
+\}
+`}
+
+
+
+* **Example**
+ To define a method that would be activated when a session is created:
+
+
+{`private void OnSessionCreatedEvent(object sender, SessionCreatedEventArgs args)
+\{
+ args.Session.MaxNumberOfRequestsPerSession = 100;
+\}
+`}
+
+
+
+ To subscribe to the event:
+
+
+{`// Subscribe to the event
+store.OnSessionCreated += OnSessionCreatedEvent;
+`}
+
+
+
+
+## `OnFailedRequest`
+This event is invoked when a request fails. It allows you, for example, to track
+and log failed requests.
+It should be defined with this signature:
+
+
+{`private void OnFailedRequestEvent(object sender, FailedRequestEventArgs args);
+`}
+
+
+
+**Parameters**:
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **sender** | `IDocumentStore ` | The subscribed store that triggered the event |
+| **args** | `FailedRequestEventArgs` | See details below |
+
+`FailedRequestEventArgs`:
+
+
+{`public class FailedRequestEventArgs : EventArgs
+\{
+ // Database Name
+ public string Database \{ get; \}
+ // Database URL
+ public string Url \{ get; \}
+ // The exception returned from the server
+ public Exception Exception \{ get; \}
+ // The message returned from the server
+ public HttpResponseMessage Response \{ get; \}
+ // The original request sent to the server
+ public HttpRequestMessage Request \{ get; \}
+\}
+`}
+
+
+
+* **Example**
+ To define a method that would be activated when a request fails:
+
+
+{`private void OnFailedRequestEvent(object sender, FailedRequestEventArgs args)
+\{
+ Logger($"Failed request for database '\{args.Database\}' ('\{args.Url\}')", args.Exception);
+\}
+`}
+
+
+
+ To subscribe to the event:
+
+
+{`// Subscribe to the event
+store.OnFailedRequest += OnFailedRequestEvent;
+`}
+
+
+
+## `OnTopologyUpdated`
+This event is invoked by a topology update (e.g. when a node is added),
+**after** the topology is updated.
+It should be defined with this signature:
+
+
+{`private void OnTopologyUpdatedEvent(object sender, TopologyUpdatedEventArgs args);
+`}
+
+
+
+**Parameters**:
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **sender** | `IDocumentStore ` | The subscribed store that triggered the event |
+| **args** | `TopologyUpdatedEventArgs` | The updated list of nodes |
+
+`TopologyUpdatedEventArgs`:
+
+
+{`public class TopologyUpdatedEventArgs : EventArgs
+\{
+ public Topology Topology \{ get; \}
+\}
+`}
+
+
+
+`Topology`:
+
+
+{`public class Topology
+\{
+    public long Etag;
+    public List<ServerNode> Nodes;
+\}
+`}
+
+
+* **Example**
+ To define a method that would be activated on a topology update:
+
+
+{`void OnTopologyUpdatedEvent(object sender, TopologyUpdatedEventArgs args)
+\{
+ var topology = args.Topology;
+ if (topology == null)
+ return;
+ for (var i = 0; i < topology.Nodes.Count; i++)
+ \{
+ // perform relevant operations on the nodes after the topology was updated
+ \}
+\}
+`}
+
+
+
+ To subscribe to the event:
+
+
+{`// Subscribe to the event
+store.OnTopologyUpdated += OnTopologyUpdatedEvent;
+`}
+
+
+
+
+
+## Store/Session Events
+You can subscribe to the following events both at the store level and in a session.
+
+
+
+ * Subscribing to an event in a session limits the scope of the subscription to this session.
+ * When you subscribe to an event at the store level, the subscription is inherited by
+ all subsequent sessions.
+
+
+
+* [OnBeforeStore](../../client-api/session/how-to/subscribe-to-events.mdx#onbeforestore)
+* [OnAfterSaveChanges](../../client-api/session/how-to/subscribe-to-events.mdx#onaftersavechanges)
+* [OnBeforeDelete](../../client-api/session/how-to/subscribe-to-events.mdx#onbeforedelete)
+* [OnBeforeQuery](../../client-api/session/how-to/subscribe-to-events.mdx#onbeforequery)
+* [OnBeforeConversionToDocument](../../client-api/session/how-to/subscribe-to-events.mdx#onbeforeconversiontodocument)
+* [OnAfterConversionToDocument](../../client-api/session/how-to/subscribe-to-events.mdx#onafterconversiontodocument)
+* [OnBeforeConversionToEntity](../../client-api/session/how-to/subscribe-to-events.mdx#onbeforeconversiontoentity)
+* [OnAfterConversionToEntity](../../client-api/session/how-to/subscribe-to-events.mdx#onafterconversiontoentity)
+* [OnSessionDisposing](../../client-api/session/how-to/subscribe-to-events.mdx#onsessiondisposing)
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/how-to/using-timeonly-and-dateonly.mdx b/versioned_docs/version-7.1/client-api/how-to/using-timeonly-and-dateonly.mdx
new file mode 100644
index 0000000000..fdb8befd37
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/how-to/using-timeonly-and-dateonly.mdx
@@ -0,0 +1,285 @@
+---
+title: "Client API: How to Use TimeOnly and DateOnly Types"
+hide_table_of_contents: true
+sidebar_label: ...use TimeOnly and DateOnly
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Client API: How to Use TimeOnly and DateOnly Types
+
+
+* To save storage space and streamline your process when you only need to know the date or the time, you can store and query
+ [DateOnly](https://devblogs.microsoft.com/dotnet/date-time-and-time-zone-enhancements-in-net-6/#the-dateonly-type)
+ and [TimeOnly](https://devblogs.microsoft.com/dotnet/date-time-and-time-zone-enhancements-in-net-6/#the-timeonly-type) types
+ instead of `DateTime`. (As of .NET version 6.0+ and RavenDB 5.3+)
+
+* You can now convert `DateTime` or strings written in date/time formats to .NET's
+ `DateOnly` or `TimeOnly` types without slowing down queries and while leaving your existing data as is.
+ * Use `AsDateOnly` or `AsTimeOnly` in a static index ([see examples below](../../client-api/how-to/using-timeonly-and-dateonly.mdx#use--or--in-a-static-index-to-convert-strings-or-datetime))
+ * `AsDateOnly` and `AsTimeOnly` automatically convert strings to ticks for faster querying.
+
+* We convert the types in [static indexes](../../indexes/map-indexes.mdx) so that the conversions and calculations are done behind the scenes
+ and the data is ready for fast queries. ([See sample index below.](../../client-api/how-to/using-timeonly-and-dateonly.mdx#convert-and-use-date/timeonly-without-affecting-your-existing-data))
+
+* In this page:
+ * [About DateOnly and TimeOnly](../../client-api/how-to/using-timeonly-and-dateonly.mdx#about-dateonly-and-timeonly)
+ * [Convert and Use Date/TimeOnly Without Affecting Your Existing Data](../../client-api/how-to/using-timeonly-and-dateonly.mdx#convert-and-use-date/timeonly-without-affecting-your-existing-data)
+ * [Using already existing DateOnly or TimeOnly fields](../../client-api/how-to/using-timeonly-and-dateonly.mdx#using-already-existing-dateonly-or-timeonly-fields)
+
+
+
+## About DateOnly and TimeOnly
+
+These two new C# types are available from .NET 6.0+ (RavenDB 5.3+).
+
+* **DateOnly**
+ According to [Microsoft .NET Blog](https://devblogs.microsoft.com/dotnet/date-time-and-time-zone-enhancements-in-net-6/#the-dateonly-type)
+ DateOnly is ideal for scenarios such as birth dates, anniversaries, hire dates,
+ and other business dates that are not typically associated with any particular time.
+ * See [their usage examples here.](https://devblogs.microsoft.com/dotnet/date-time-and-time-zone-enhancements-in-net-6/#the-dateonly-type)
+
+* **TimeOnly**
+ According to [Microsoft .NET Blog](https://devblogs.microsoft.com/dotnet/date-time-and-time-zone-enhancements-in-net-6/#the-timeonly-type)
+ TimeOnly is ideal for scenarios such as recurring meeting times, daily alarm clock times,
+ or the times that a business opens and closes each day of the week.
+ * See [their usage examples here.](https://devblogs.microsoft.com/dotnet/date-time-and-time-zone-enhancements-in-net-6/#the-timeonly-type)
+
+
+
+## Convert and Use Date/TimeOnly Without Affecting Your Existing Data
+
+RavenDB offers conversion of types in static indexes with the methods [AsDateOnly or AsTimeOnly](../../client-api/how-to/using-timeonly-and-dateonly.mdx#use--or--in-a-static-index-to-convert-strings-or-datetime).
+
+* [Static indexes](../../indexes/indexing-basics.mdx) process new data in the background,
+ including calculations and conversions to DateOnly/TimeOnly values, which can be used as ticks,
+ so that the data is ready at query time when you [query the index](../../indexes/querying/query-index.mdx).
+    * These indexes perform all of the calculations on the entire dataset you define the first time they run,
+      and afterwards they only need to process changes in the data.
+
+
+Ticks are faster to compute than other date/time formats because they are [simple numbers](https://docs.microsoft.com/en-us/dotnet/api/system.datetime.ticks?view=net-6.0)
+that represent the time elapsed since midnight on January 1, 0001.
+
+If your data is in strings, to use ticks you must create a **static index**
+that computes the conversion from strings to `DateOnly` or `TimeOnly`.
+
+RavenDB automatically converts strings into ticks via `AsDateOnly` or `AsTimeOnly`.
+
+An auto-index will not convert strings into ticks, but will index data as strings.
+By defining a query that creates an auto-index which [orders](../../indexes/querying/sorting.mdx) the strings, you can also compare strings,
+though comparing ticks is faster.
+
+
+### Use `AsDateOnly` or `AsTimeOnly` in a static index to convert strings or DateTime
+
+* [Converting Strings to DateOnly or TimeOnly](../../client-api/how-to/using-timeonly-and-dateonly.mdx#converting-strings-with-minimal-cost)
+* [Converting DateTime to DateOnly or TimeOnly](../../client-api/how-to/using-timeonly-and-dateonly.mdx#converting--with-minimal-cost)
+
+#### Converting Strings with minimal cost
+
+The following generic sample is a map index where `AsDateOnly` converts the string `item.StringDateOnlyField` into `DateOnly`.
+
+When the converted data is available in the index, you can inexpensively [query the index](../../indexes/querying/query-index.mdx).
+
+Strings are automatically converted to ticks for faster querying.
+
+
+
+{`// Create a Static Index.
+public class StringAsDateOnlyConversion : AbstractIndexCreationTask<StringItem>
+\{
+ public StringAsDateOnlyConversion()
+ \{
+ // This map index converts strings that are in date format to DateOnly with AsDateOnly().
+ Map = items => from item in items
+ // RavenDB doesn't look for DateOnly or TimeOnly as default types during indexing
+                       // so the variables must be wrapped in AsDateOnly() or AsTimeOnly() explicitly.
+ where AsDateOnly(item.DateTimeValue) < AsDateOnly(item.DateOnlyValue).AddDays(-50)
+ select new DateOnlyItem \{ DateOnlyField = AsDateOnly(item.StringDateOnlyField) \};
+ \}
+\}
+
+public class StringItem
+\{
+ public string StringDateOnlyField \{ get; set; \}
+ public object DateTimeValue \{ get; set; \}
+ public object DateOnlyValue \{ get; set; \}
+\}
+
+public class DateOnlyItem
+\{
+ public DateOnly? DateOnlyField \{ get; set; \}
+\};
+`}
+
+
+
+
+RavenDB doesn't look for DateOnly or TimeOnly as default types during indexing,
+so the variables must be wrapped in AsDateOnly() or AsTimeOnly() explicitly.
+
+
+Using the static index above: a string in date format ("2022-05-12") is saved, the index converts it to `DateOnly`,
+and then the index is queried.
+
+
+
+{`using (var session = store.OpenSession())
+\{
+ // A string in date format is saved.
+ session.Store(new StringItem()
+ \{
+ StringDateOnlyField = "2022-05-12"
+ \});
+ session.SaveChanges();
+\}
+// This is the index used earlier.
+new StringAsDateOnlyConversion().Execute(store);
+WaitForIndexing(store);
+
+using (var session = store.OpenSession())
+\{
+ var today = new DateOnly(2022, 5, 12);
+ // Query the index created earlier for items which were marked with today's date
+    var element = session.Query<DateOnlyItem, StringAsDateOnlyConversion>()
+        .Where(item => item.DateOnlyField == today)
+        // This is an optional type relaxation for projections
+        .As<StringItem>().Single();
+\}
+`}
+
+
+#### Converting `DateTime` with minimal cost
+
+The following generic sample is a map index that converts `DateTime` into `DateOnly` and saves the values in the index.
+
+Once the converted data is available in the static index, you can inexpensively [query the index](../../indexes/querying/query-index.mdx).
+
+
+
+{`// Create a Static Index.
+public class DateTimeAsDateOnlyConversion : AbstractIndexCreationTask<DateTimeItem>
+\{
+ public DateTimeAsDateOnlyConversion()
+ \{
+ // This map index converts DateTime to DateOnly with AsDateOnly().
+ Map = items => from item in items
+ // RavenDB doesn't look for DateOnly or TimeOnly as default types during indexing
+                       // so the variables must be wrapped in AsDateOnly() or AsTimeOnly() explicitly.
+ where AsDateOnly(item.DateTimeValue) < AsDateOnly(item.DateOnlyValue).AddDays(-50)
+ select new DateOnlyItem \{ DateOnlyField = AsDateOnly(item.DateTimeField) \};
+ \}
+\}
+
+public class DateTimeItem
+\{
+ public DateTime? DateTimeField \{ get; set; \}
+ public object DateTimeValue \{ get; set; \}
+ public object DateOnlyValue \{ get; set; \}
+\}
+`}
+
+
+
+
+RavenDB doesn't look for DateOnly or TimeOnly as default types during indexing,
+so the variables must be wrapped in AsDateOnly() or AsTimeOnly() explicitly.
+
+
+Using the index above, the following example saves `DateTime.Now`; the type is converted in the index, and then
+the index is queried.
+
+
+
+{`using (var session = store.OpenSession())
+\{
+    // A DateTime value is saved
+    session.Store(new DateTimeItem()
+    \{
+        DateTimeField = DateTime.Now
+    \});
+    session.SaveChanges();
+\}
+// The index above is called and we wait for the index to finish converting
+new DateTimeAsDateOnlyConversion().Execute(store);
+WaitForIndexing(store);
+
+using (var session = store.OpenSession())
+\{
+ // Query the index
+ var today = DateOnly.FromDateTime(DateTime.Now);
+    var element = session.Query<DateOnlyItem, DateTimeAsDateOnlyConversion>()
+        .Where(item => item.DateOnlyField == today)
+        // This is an optional type relaxation for projections
+        .As<DateTimeItem>().Single();
+\}
+`}
+
+
+
+
+
+
+## Using already existing DateOnly or TimeOnly fields
+
+RavenDB doesn't look for DateOnly or TimeOnly as default types during indexing
+so the index must have a field that declares the type as DateOnly or TimeOnly.
+
+
+
+{`public class DateAndTimeOnlyIndex : AbstractIndexCreationTask
+\{
+ public class IndexEntry
+ \{
+
+ public DateOnly DateOnly \{ get; set; \}
+ public int Year \{ get; set; \}
+ public DateOnly DateOnlyString \{ get; set; \}
+ public TimeOnly TimeOnlyString \{ get; set; \}
+ public TimeOnly TimeOnly \{ get; set; \}
+ \}
+
+ public DateAndTimeOnlyIndex()
+ \{
+ Map = dates => from date in dates
+ select new IndexEntry() \{ DateOnly = date.DateOnly, TimeOnly = date.TimeOnly \};
+ \}
+
+\}
+`}
+
+
+
+For example, the following query will find all of the entries that occurred between 15:00 and 17:00,
+without considering the date.
+
+
+
+{`var after = new TimeOnly(15, 00);
+var before = new TimeOnly(17, 00);
+var result = session
+    .Query<DateAndTimeOnlyIndex.IndexEntry, DateAndTimeOnlyIndex>()
+    .Where(i => i.TimeOnly > after && i.TimeOnly < before)
+    .ToList();
+`}
+
+
+
+**Querying on Ticks**
+Strings are automatically converted to ticks with [`AsDateOnly` and `AsTimeOnly`](../../client-api/how-to/using-timeonly-and-dateonly.mdx#use--or--in-a-static-index-to-convert-strings-or-datetime).
+
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/net-client-versions.mdx b/versioned_docs/version-7.1/client-api/net-client-versions.mdx
new file mode 100644
index 0000000000..88956401e2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/net-client-versions.mdx
@@ -0,0 +1,26 @@
+---
+title: "Client API: .NET Client versions"
+hide_table_of_contents: true
+sidebar_label: .NET Client Versions
+sidebar_position: 5
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import NetClientVersionsCsharp from './_net-client-versions-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/_category_.json b/versioned_docs/version-7.1/client-api/operations/_category_.json
new file mode 100644
index 0000000000..bb8ee2ccfb
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 7,
+ "label": Operations,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/_what-are-operations-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-csharp.mdx
new file mode 100644
index 0000000000..e37c702e25
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-csharp.mdx
@@ -0,0 +1,771 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The RavenDB Client API is built with the notion of layers.
+ At the top, and what you will usually interact with, are the **[DocumentStore](../../client-api/what-is-a-document-store.mdx)**
+ and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx)**.
+ They, in turn, are built on top of the lower-level **Operations** and **Commands** API.
+
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
+
+* In this page:
+ * [Why use operations](../../client-api/operations/what-are-operations.mdx#why-use-operations)
+ * [How operations work](../../client-api/operations/what-are-operations.mdx#how-operations-work)
+ * **Operation types**:
+ * [Common operations](../../client-api/operations/what-are-operations.mdx#common-operations)
+ * [Maintenance operations](../../client-api/operations/what-are-operations.mdx#maintenance-operations)
+ * [Server-maintenance operations](../../client-api/operations/what-are-operations.mdx#server-maintenance-operations)
+ * [Manage lengthy operations](../../client-api/operations/what-are-operations.mdx#manage-lengthy-operations)
+ * [Wait for completion](../../client-api/operations/what-are-operations.mdx#wait-for-completion)
+ * [Kill operation](../../client-api/operations/what-are-operations.mdx#kill-operation)
+
+
+## Why use operations
+
+* Operations provide **management functionality** that is not available in the context of the session, for example:
+ * Create/delete a database
+ * Execute administrative tasks
+ * Assign permissions
+ * Change server configuration, and more.
+
+* The operations are executed on the DocumentStore and are not part of the session transaction.
+
+* There are some client tasks, such as patching documents, that can be carried out either via the Session ([session.Advanced.Patch()](../../client-api/operations/patching/single-document.mdx#array-manipulation))
+ or via an Operation on the DocumentStore ([PatchOperation](../../client-api/operations/patching/single-document.mdx#operations-api)).
+
+
+
+## How operations work
+
+* **Sending the request**:
+ Each Operation is an encapsulation of a `RavenCommand`.
+ The RavenCommand creates the HTTP request message to be sent to the relevant server endpoint.
+ The DocumentStore `OperationExecutor` sends the request and processes the results.
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the [client configuration](../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, server-maintenance operations can be executed on a specific node by using the [ForNode](../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) method.
+* **Target database**:
+ By default, operations work on the default database defined in the DocumentStore.
+ However, common operations & maintenance operations can operate on a different database by using the [ForDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) method.
+* **Transaction scope**:
+ Operations execute as a single-node transaction.
+ If needed, data will then replicate to the other nodes in the database-group.
+* **Background operations**:
+ Some operations may take a long time to complete and can be awaited for completion.
+ Learn more [below](../../client-api/operations/what-are-operations.mdx#wait-for-completion).
+
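+For example, a minimal sketch of running a maintenance operation against a database other than the store's default (the database name "AnotherDatabase" is illustrative):
+
+
+{`// Switch the executor to a specific database with ForDatabase
+DatabaseStatistics stats = documentStore.Maintenance
+    .ForDatabase("AnotherDatabase")
+    .Send(new GetStatisticsOperation());
+`}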
+
+
+## Common operations
+
+* All common operations implement the `IOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [ForDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) to operate on a specific database other than the default defined in the store.
+
+* These operations include set-based operations such as _PatchOperation_, _CounterBatchOperation_,
+ document-extensions related operations such as getting/putting an attachment, and more.
+ See all available operations [below](../../client-api/operations/what-are-operations#the-following-common-operations-are-available).
+
+* To execute a common operation request,
+ use the `Send` method on the `Operations` property of the DocumentStore.
+
+#### Example:
+
+
+
+
+{`// Define operation, e.g. get all counters info for a document
+IOperation<CountersDetail> getCountersOp = new GetCountersOperation("products/1-A");
+
+// Execute the operation by passing the operation to Operations.Send
+CountersDetail allCountersResult = documentStore.Operations.Send(getCountersOp);
+
+// Access the operation result
+int numberOfCounters = allCountersResult.Counters.Count;
+`}
+
+
+
+
+{`// Define operation, e.g. get all counters info for a document
+IOperation<CountersDetail> getCountersOp = new GetCountersOperation("products/1-A");
+
+// Execute the operation by passing the operation to Operations.Send
+CountersDetail allCountersResult = await documentStore.Operations.SendAsync(getCountersOp);
+
+// Access the operation result
+int numberOfCounters = allCountersResult.Counters.Count;
+`}
+
+
+
+
+##### Syntax:
+
+
+
+
+{`// Available overloads:
+void Send(IOperation operation, SessionInfo sessionInfo = null);
+TResult Send<TResult>(IOperation<TResult> operation, SessionInfo sessionInfo = null);
+Operation Send(IOperation<OperationIdResult> operation, SessionInfo sessionInfo = null);
+
+PatchStatus Send(PatchOperation operation);
+PatchOperation.Result<TEntity> Send<TEntity>(PatchOperation<TEntity> operation);
+`}
+
+
+
+
+{`// Available overloads:
+Task SendAsync(IOperation operation,
+    CancellationToken token = default(CancellationToken), SessionInfo sessionInfo = null);
+Task<TResult> SendAsync<TResult>(IOperation<TResult> operation,
+    CancellationToken token = default(CancellationToken), SessionInfo sessionInfo = null);
+Task<Operation> SendAsync(IOperation<OperationIdResult> operation,
+    CancellationToken token = default(CancellationToken), SessionInfo sessionInfo = null);
+
+Task<PatchStatus> SendAsync(PatchOperation operation,
+    CancellationToken token = default(CancellationToken));
+Task<PatchOperation.Result<TEntity>> SendAsync<TEntity>(PatchOperation<TEntity> operation,
+    CancellationToken token = default(CancellationToken));
+`}
+
+
+
+
+
+
+#### The following common operations are available:
+
+* **Attachments**:
+ [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment.mdx)
+ [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment.mdx)
+ [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment.mdx)
+
+* **Counters**:
+ [CounterBatchOperation](../../client-api/operations/counters/counter-batch.mdx)
+ [GetCountersOperation](../../client-api/operations/counters/get-counters.mdx)
+
+* **Time series**:
+ [TimeSeriesBatchOperation](../../document-extensions/timeseries/client-api/operations/append-and-delete.mdx)
+ [GetMultipleTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get.mdx)
+ [GetTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get.mdx)
+ GetTimeSeriesStatisticsOperation
+
+* **Revisions**:
+ [GetRevisionsOperation](../../document-extensions/revisions/client-api/operations/get-revisions.mdx)
+ [RevertRevisionsByIdOperation](../../document-extensions/revisions/client-api/operations/revert-document-to-revision.mdx)
+
+* **Patching**:
+ [PatchOperation](../../client-api/operations/patching/single-document.mdx)
+ [PatchByQueryOperation](../../client-api/operations/patching/set-based.mdx)
+
+* **Delete by query**:
+ [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query.mdx)
+
+* **Compare-exchange**:
+ [PutCompareExchangeValueOperation](../../compare-exchange/create-cmpxchg-items#create-item-using-a-store-operation)
+ [GetCompareExchangeValueOperation](../../compare-exchange/get-cmpxchg-item#get-item-using-a-store-operation)
+ [GetCompareExchangeValuesOperation](../../compare-exchange/get-cmpxchg-items)
+ [DeleteCompareExchangeValueOperation](../../compare-exchange/delete-cmpxchg-items#delete-compare-exchange-item-using-a-store-operation)
+
+
+
+
+## Maintenance operations
+
+* All maintenance operations implement the `IMaintenanceOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [ForDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) to operate on a specific database other than the default defined in the store.
+
+* These operations include database management operations such as setting client configuration,
+ managing indexes & ongoing-tasks operations, getting stats, and more.
+ See all available maintenance operations [below](../../client-api/operations/what-are-operations#the-following-maintenance-operations-are-available).
+
+* To execute a maintenance operation request,
+ use the `Send` method on the `Maintenance` property in the DocumentStore.
+
+#### Example:
+
+
+
+
+{`// Define operation, e.g. stop an index
+IMaintenanceOperation stopIndexOp = new StopIndexOperation("Orders/ByCompany");
+
+// Execute the operation by passing the operation to Maintenance.Send
+documentStore.Maintenance.Send(stopIndexOp);
+
+// This specific operation returns void
+// You can send another operation to verify the index running status
+IMaintenanceOperation<IndexStats> indexStatsOp = new GetIndexStatisticsOperation("Orders/ByCompany");
+IndexStats indexStats = documentStore.Maintenance.Send(indexStatsOp);
+IndexRunningStatus status = indexStats.Status; // will be "Paused"
+`}
+
+
+
+
+{`// Define operation, e.g. stop an index
+IMaintenanceOperation stopIndexOp = new StopIndexOperation("Orders/ByCompany");
+
+// Execute the operation by passing the operation to Maintenance.Send
+await documentStore.Maintenance.SendAsync(stopIndexOp);
+
+// This specific operation returns void
+// You can send another operation to verify the index running status
+IMaintenanceOperation<IndexStats> indexStatsOp = new GetIndexStatisticsOperation("Orders/ByCompany");
+IndexStats indexStats = await documentStore.Maintenance.SendAsync(indexStatsOp);
+IndexRunningStatus status = indexStats.Status; // will be "Paused"
+`}
+
+
+
+
+##### Syntax:
+
+
+
+
+{`// Available overloads:
+void Send(IMaintenanceOperation operation);
+TResult Send<TResult>(IMaintenanceOperation<TResult> operation);
+Operation Send(IMaintenanceOperation<OperationIdResult> operation);
+`}
+
+
+
+
+{`// Available overloads:
+Task SendAsync(IMaintenanceOperation operation,
+    CancellationToken token = default(CancellationToken));
+Task<TResult> SendAsync<TResult>(IMaintenanceOperation<TResult> operation,
+    CancellationToken token = default(CancellationToken));
+Task<Operation> SendAsync(IMaintenanceOperation<OperationIdResult> operation,
+    CancellationToken token = default(CancellationToken));
+`}
+
+
+
+
+
+
+#### The following maintenance operations are available:
+
+* **Statistics**:
+ [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-database-stats)
+ [GetDetailedStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-stats)
+ [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-collection-stats)
+ [GetDetailedCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-stats)
+
+* **Client Configuration**:
+ [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration.mdx)
+ [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration.mdx)
+
+* **Indexes**:
+ [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes.mdx)
+ [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock.mdx)
+ [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority.mdx)
+ [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors.mdx)
+ [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index.mdx)
+ [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes.mdx)
+ [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms.mdx)
+ GetIndexPerformanceStatisticsOperation
+ GetIndexStatisticsOperation
+ GetIndexesStatisticsOperation
+ GetIndexingStatusOperation
+ GetIndexStalenessOperation
+ [GetIndexNamesOperation](../../client-api/operations/maintenance/indexes/get-index-names.mdx)
+ [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index.mdx)
+ [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing.mdx)
+ [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index.mdx)
+ [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing.mdx)
+ [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index.mdx)
+ [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index.mdx)
+ [DeleteIndexErrorsOperation](../../client-api/operations/maintenance/indexes/delete-index-errors.mdx)
+ [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index.mdx)
+ [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index.mdx)
+ [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed.mdx)
+
+* **Analyzers**:
+ [PutAnalyzersOperation](../../indexes/using-analyzers.mdx#add-custom-analyzer-via-client-api)
+ DeleteAnalyzerOperation
+
+* **Ongoing tasks**:
+ [GetOngoingTaskInfoOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#get-ongoing-task-info)
+ [ToggleOngoingTaskStateOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#toggle-ongoing-task-state)
+ [DeleteOngoingTaskOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#delete-ongoing-task)
+
+* **ETL tasks**:
+ AddEtlOperation
+ UpdateEtlOperation
+ [ResetEtlOperation](../../client-api/operations/maintenance/etl/reset-etl.mdx)
+
+* **AI tasks**:
+ [AddEmbeddingsGenerationOperation](../../ai-integration/generating-embeddings/embeddings-generation-task.mdx#configuring-an-embeddings-generation-task---from-the-client-api)
+
+* **Replication tasks**:
+ PutPullReplicationAsHubOperation
+ GetPullReplicationTasksInfoOperation
+ GetReplicationHubAccessOperation
+ GetReplicationPerformanceStatisticsOperation
+ RegisterReplicationHubAccessOperation
+ UnregisterReplicationHubAccessOperation
+ UpdateExternalReplicationOperation
+ UpdatePullReplicationAsSinkOperation
+
+* **Backup**:
+ BackupOperation
+ GetPeriodicBackupStatusOperation
+ StartBackupOperation
+ UpdatePeriodicBackupOperation
+
+* **Connection strings**:
+ [PutConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx)
+ [RemoveConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/remove-connection-string.mdx)
+ [GetConnectionStringsOperation](../../client-api/operations/maintenance/connection-strings/get-connection-string.mdx)
+
+* **Transaction recording**:
+ StartTransactionsRecordingOperation
+ StopTransactionsRecordingOperation
+ ReplayTransactionsRecordingOperation
+
+* **Database settings**:
+ [PutDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#put-database-settings-operation)
+ [GetDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#get-database-settings-operation)
+
+* **Identities**:
+ [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities.mdx)
+ [NextIdentityForOperation](../../client-api/operations/maintenance/identities/increment-next-identity.mdx)
+ [SeedIdentityForOperation](../../client-api/operations/maintenance/identities/seed-identity.mdx)
+
+* **Time series**:
+ ConfigureTimeSeriesOperation
+ ConfigureTimeSeriesPolicyOperation
+ ConfigureTimeSeriesValueNamesOperation
+ RemoveTimeSeriesPolicyOperation
+
+* **Revisions**:
+ [ConfigureRevisionsOperation](../../document-extensions/revisions/client-api/operations/configure-revisions.mdx)
+ [DeleteRevisionsOperation](../../document-extensions/revisions/client-api/operations/delete-revisions.mdx)
+ [ConfigureRevisionsBinCleanerOperation](../../document-extensions/revisions/revisions-bin-cleaner.mdx#setting-the-revisions-bin-cleaner---from-the-client-api)
+
+* **Sorters**:
+ [PutSortersOperation](../../client-api/operations/maintenance/sorters/put-sorter.mdx)
+ DeleteSorterOperation
+
+* **Sharding**:
+ [AddPrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#add-prefixes-after-database-creation)
+ [DeletePrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#removing-prefixes)
+ [UpdatePrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#updating-shard-configurations-for-prefixes)
+
+* **Misc**:
+ ConfigureExpirationOperation
+ ConfigureRefreshOperation
+ [ConfigureDataArchivalOperation](../../data-archival/enable-data-archiving.mdx#enable-archiving---from-the-client-api)
+ UpdateDocumentsCompressionConfigurationOperation
+ DatabaseHealthCheckOperation
+ GetOperationStateOperation
+ CreateSampleDataOperation
+
+
+
+
+## Server-maintenance operations
+
+* All server-maintenance operations implement the `IServerOperation` interface.
+ The operation is executed within the **server scope**.
+ Use [ForNode](../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) to operate on a specific node other than the default defined in the client configuration.
+
+* These operations include server management and configuration operations.
+ See all available operations [below](../../client-api/operations/what-are-operations#the-following-server-maintenance-operations-are-available).
+
+* To execute a server-maintenance operation request,
+ use the `Send` method on the `Maintenance.Server` property in the DocumentStore.
+
+#### Example:
+
+
+
+
+{`// Define operation, e.g. get the server build number
+IServerOperation<BuildNumber> getBuildNumberOp = new GetBuildNumberOperation();
+
+// Execute the operation by passing the operation to Maintenance.Server.Send
+BuildNumber buildNumberResult = documentStore.Maintenance.Server.Send(getBuildNumberOp);
+
+// Access the operation result
+int version = buildNumberResult.BuildVersion;
+`}
+
+
+
+
+{`// Define operation, e.g. get the server build number
+IServerOperation<BuildNumber> getBuildNumberOp = new GetBuildNumberOperation();
+
+// Execute the operation by passing the operation to Maintenance.Server.Send
+BuildNumber buildNumberResult = await documentStore.Maintenance.Server.SendAsync(getBuildNumberOp);
+
+// Access the operation result
+int version = buildNumberResult.BuildVersion;
+`}
+
+
+
+
+##### Syntax:
+
+
+
+
+{`// Available overloads:
+void Send(IServerOperation operation);
+TResult Send<TResult>(IServerOperation<TResult> operation);
+Operation Send(IServerOperation<OperationIdResult> operation);
+`}
+
+
+
+
+{`// Available overloads:
+Task SendAsync(IServerOperation operation,
+    CancellationToken token = default(CancellationToken));
+Task<TResult> SendAsync<TResult>(IServerOperation<TResult> operation,
+    CancellationToken token = default(CancellationToken));
+Task<Operation> SendAsync(IServerOperation<OperationIdResult> operation,
+    CancellationToken token = default(CancellationToken));
+`}
+
+
+
+
+
+
+#### The following server-maintenance operations are available:
+
+* **Client certificates**:
+ [PutClientCertificateOperation](../../client-api/operations/server-wide/certificates/put-client-certificate.mdx)
+ [CreateClientCertificateOperation](../../client-api/operations/server-wide/certificates/create-client-certificate.mdx)
+ [GetCertificatesOperation](../../client-api/operations/server-wide/certificates/get-certificates.mdx)
+ [DeleteCertificateOperation](../../client-api/operations/server-wide/certificates/delete-certificate.mdx)
+ EditClientCertificateOperation
+ GetCertificateMetadataOperation
+ ReplaceClusterCertificateOperation
+
+* **Server-wide client configuration**:
+ [PutServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx)
+ [GetServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx)
+
+* **Database management**:
+ [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database.mdx)
+ [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database.mdx)
+ [ToggleDatabasesStateOperation](../../client-api/operations/server-wide/toggle-databases-state.mdx)
+ [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names.mdx)
+ [AddDatabaseNodeOperation](../../client-api/operations/server-wide/add-database-node.mdx)
+ [PromoteDatabaseNodeOperation](../../client-api/operations/server-wide/promote-database-node.mdx)
+ [ReorderDatabaseMembersOperation](../../client-api/operations/server-wide/reorder-database-members.mdx)
+ [CompactDatabaseOperation](../../client-api/operations/server-wide/compact-database.mdx)
+ GetDatabaseRecordOperation
+ SetDatabasesLockOperation
+ CreateDatabaseOperationWithoutNameValidation
+ SetDatabaseDynamicDistributionOperation
+ ModifyDatabaseTopologyOperation
+ UpdateDatabaseOperation
+ UpdateUnusedDatabasesOperation
+
+* **Server-wide ongoing tasks**:
+ DeleteServerWideTaskOperation
+ ToggleServerWideTaskStateOperation
+
+* **Server-wide replication tasks**:
+ PutServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationsOperation
+
+* **Server-wide backup tasks**:
+ PutServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationsOperation
+ RestoreBackupOperation
+
+* **Server-wide analyzers**:
+ [PutServerWideAnalyzersOperation](../../indexes/using-analyzers.mdx#add-custom-analyzer-via-client-api)
+ DeleteServerWideAnalyzerOperation
+
+* **Server-wide sorters**:
+ [PutServerWideSortersOperation](../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx)
+ DeleteServerWideSorterOperation
+
+* **Logs & debug**:
+ SetLogsConfigurationOperation
+ GetLogsConfigurationOperation
+ GetClusterDebugInfoPackageOperation
+ [GetBuildNumberOperation](../../client-api/operations/server-wide/get-build-number.mdx)
+ GetServerWideOperationStateOperation
+
+* **Traffic watch**:
+ PutTrafficWatchConfigurationOperation
+ GetTrafficWatchConfigurationOperation
+
+* **Revisions**:
+ [ConfigureRevisionsForConflictsOperation](../../document-extensions/revisions/client-api/operations/conflict-revisions-configuration.mdx)
+
+* **Misc**:
+ ModifyConflictSolverOperation
+ OfflineMigrationOperation
+
+
+
+
+## Manage lengthy operations
+
+* Some operations that run in the server background may take a long time to complete.
+
+* For Operations that implement an interface with type `OperationIdResult`,
+ executing the operation via the `Send` method will return an `Operation` object,
+ which can be **awaited for completion** or **aborted (killed)**.
+
+#### Wait for completion:
+
+
+
+
+{`public void WaitForCompletionWithTimeout(
+ TimeSpan timeout,
+ DocumentStore documentStore)
+{
+ // Define operation, e.g. delete all discontinued products
+    // Note: This operation implements interface: 'IOperation<OperationIdResult>'
+    IOperation<OperationIdResult> deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // Send returns an 'Operation' object that can be awaited on
+ Operation operation = documentStore.Operations.Send(deleteByQueryOp);
+
+ try
+ {
+ // Call method 'WaitForCompletion' to wait for the operation to complete.
+ // If a timeout is specified, the method will only wait for the specified time frame.
+ BulkOperationResult result =
+ (BulkOperationResult)operation.WaitForCompletion(timeout);
+
+ // The operation has finished within the specified timeframe
+ long numberOfItemsDeleted = result.Total; // Access the operation result
+ }
+ catch (TimeoutException e)
+ {
+ // The operation did not finish within the specified timeframe
+ }
+}
+`}
+
+
+
+
+{`public async Task WaitForCompletionWithTimeoutAsync(
+ TimeSpan timeout,
+ DocumentStore documentStore)
+{
+ // Define operation, e.g. delete all discontinued products
+    // Note: This operation implements interface: 'IOperation<OperationIdResult>'
+    IOperation<OperationIdResult> deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // SendAsync returns an 'Operation' object that can be awaited on
+ Operation operation = await documentStore.Operations.SendAsync(deleteByQueryOp);
+
+ try
+ {
+ // Call method 'WaitForCompletionAsync' to wait for the operation to complete.
+ // If a timeout is specified, the method will only wait for the specified time frame.
+ BulkOperationResult result =
+ await operation.WaitForCompletionAsync(timeout)
+ .ConfigureAwait(false) as BulkOperationResult;
+
+ // The operation has finished within the specified timeframe
+ long numberOfItemsDeleted = result.Total; // Access the operation result
+ }
+ catch (TimeoutException e)
+ {
+        // The operation did not finish within the specified timeframe
+ }
+}
+`}
+
+
+
+
+{`public void WaitForCompletionWithCancellationToken(
+ CancellationToken token,
+ DocumentStore documentStore)
+{
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+ IOperation deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // Send returns an 'Operation' object that can be awaited on
+ Operation operation = documentStore.Operations.Send(deleteByQueryOp);
+
+ try
+ {
+ // Call method 'WaitForCompletion' to wait for the operation to complete.
+ // Pass a CancellationToken in order to stop waiting upon a cancellation request.
+ BulkOperationResult result =
+ (BulkOperationResult)operation.WaitForCompletion(token);
+
+ // The operation has finished, no cancellation request was made
+ long numberOfItemsDeleted = result.Total; // Access the operation result
+ }
+ catch (TimeoutException e)
+ {
+        // The operation was still running when cancellation was requested
+ }
+}
+`}
+
+
+
+
+{`public async Task WaitForCompletionWithCancellationTokenAsync(
+ CancellationToken token,
+ DocumentStore documentStore)
+{
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+ IOperation deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // SendAsync returns an 'Operation' object that can be awaited on
+ Operation operation = await documentStore.Operations.SendAsync(deleteByQueryOp);
+
+ try
+ {
+ // Call method 'WaitForCompletionAsync' to wait for the operation to complete.
+ // Pass a CancellationToken in order to stop waiting upon a cancellation request.
+ BulkOperationResult result =
+ await operation.WaitForCompletionAsync(token)
+ .ConfigureAwait(false) as BulkOperationResult;
+
+ // The operation has finished, no cancellation request was made
+ long numberOfItemsDeleted = result.Total; // Access the operation result
+ }
+ catch (TimeoutException e)
+ {
+        // The operation was still running when cancellation was requested
+ }
+}
+`}
+
+
+
+
+##### Syntax:
+
+
+
+
+{`// Available overloads:
+public IOperationResult WaitForCompletion(TimeSpan? timeout = null)
+public IOperationResult WaitForCompletion(CancellationToken token)
+
+public TResult WaitForCompletion<TResult>(TimeSpan? timeout = null)
+    where TResult : IOperationResult
+public TResult WaitForCompletion<TResult>(CancellationToken token)
+    where TResult : IOperationResult
+`}
+
+
+
+
+{`// Available overloads:
+public Task<IOperationResult> WaitForCompletionAsync(TimeSpan? timeout = null)
+public Task<IOperationResult> WaitForCompletionAsync(CancellationToken token)
+
+public async Task<TResult> WaitForCompletionAsync<TResult>(TimeSpan? timeout = null)
+    where TResult : IOperationResult
+public async Task<TResult> WaitForCompletionAsync<TResult>(CancellationToken token)
+    where TResult : IOperationResult
+`}
+
+
+
+
+| Parameter | Type | Description |
+|-------------|---------------------|-------------|
+| **timeout** | `TimeSpan` | When a timeout is specified - the server throws a `TimeoutException` if the operation has not completed within the specified time frame. The operation itself continues to run in the background; no rollback action takes place.<br/>`null` - `WaitForCompletion` waits indefinitely for the operation to complete. |
+| **token** | `CancellationToken` | When a cancellation token is specified - the server throws a `TimeoutException` if the operation has not completed by the time cancellation is requested. The operation itself continues to run in the background; no rollback action takes place. |
+
+| Return type | Description |
+|--------------------|-------------------------------|
+| `IOperationResult` | The operation result content. |
+
+#### Kill operation:
+
+
+
+
+{`// Define operation, e.g. delete all discontinued products
+// Note: This operation implements interface: 'IOperation'
+IOperation deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+// Execute the operation
+// Send returns an 'Operation' object that can be 'killed'
+Operation operation = documentStore.Operations.Send(deleteByQueryOp);
+
+// Call 'Kill' to abort operation
+operation.Kill();
+`}
+
+
+
+
+{`// Define operation, e.g. delete all discontinued products
+// Note: This operation implements interface: 'IOperation'
+IOperation deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+// Execute the operation
+// SendAsync returns an 'Operation' object that can be 'killed'
+Operation operation = await documentStore.Operations.SendAsync(deleteByQueryOp);
+
+// Call 'KillAsync' to abort operation
+await operation.KillAsync();
+
+// Assert that the operation is no longer running
+// (waiting on a killed operation is expected to throw)
+await Assert.ThrowsAnyAsync<Exception>(() =>
+    operation.WaitForCompletionAsync(TimeSpan.FromSeconds(30)));
+`}
+
+
+
+
+##### Syntax:
+
+
+
+{`// Available overloads:
+public void Kill()
+public async Task KillAsync(CancellationToken token = default)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------|---------------------|----------------------------------------------------------------------|
+| **token** | `CancellationToken` | An optional cancellation token that can be used to abort the `KillAsync` call itself (see the sketch below) |
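+
+For illustration, a minimal sketch of aborting the `KillAsync` call itself. The `CancellationTokenSource`, its 5-second limit, and the reuse of the `operation` object from the example above are illustrative assumptions:
+
+
+
+{`// Abort the KillAsync call itself if it has not completed within 5 seconds
+using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5)))
+{
+    await operation.KillAsync(cts.Token);
+}
+`}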
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/_what-are-operations-java.mdx b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-java.mdx
new file mode 100644
index 0000000000..e73c3f9ba8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-java.mdx
@@ -0,0 +1,203 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The RavenDB client API is built with the notion of layers. At the top, and what you will usually interact with, are the **[DocumentStore](../../client-api/what-is-a-document-store.mdx)** and the **[DocumentSession](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx)**.
+
+They, in turn, are built on top of the notion of Operations and Commands.
+
+Operations are an encapsulation of a set of low-level commands which are used to manipulate data, execute administrative tasks, and change the configuration on a server.
+
+They are available on the DocumentStore via the **operations()**, **maintenance()**, and **maintenance().server()** methods.
+
+## Common Operations
+
+Common operations include set-based operations such as [Patching](../../client-api/operations/patching/set-based.mdx) and removing documents by query (read more [here](../../client-api/operations/common/delete-by-query.mdx)).
+You can also handle distributed [Compare Exchange](../../client-api/operations/compare-exchange/overview.mdx) operations and manage [Attachments](../../client-api/operations/attachments/get-attachment.mdx) and [Counters](../../client-api/operations/counters/counter-batch.mdx).
+
+### How to Send an Operation
+
+In order to execute an operation, you will need to use the `send` or `sendAsync` methods. Available overloads are:
+
+
+
+{`public void send(IVoidOperation operation)
+
+public void send(IVoidOperation operation, SessionInfo sessionInfo)
+
+public <TResult> TResult send(IOperation<TResult> operation)
+
+public <TResult> TResult send(IOperation<TResult> operation, SessionInfo sessionInfo)
+
+public PatchStatus send(PatchOperation operation, SessionInfo sessionInfo)
+
+public <TEntity> PatchOperation.Result<TEntity> send(Class<TEntity> entityClass, PatchOperation operation, SessionInfo sessionInfo)
+`}
+
+
+
+
+{`public Operation sendAsync(IOperation<OperationIdResult> operation)
+
+public Operation sendAsync(IOperation<OperationIdResult> operation, SessionInfo sessionInfo)
+`}
+
+
+
+
+### The following operations are available:
+
+#### Compare Exchange
+
+* [CompareExchange](../../compare-exchange/overview)
+
+#### Attachments
+
+* [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment.mdx)
+* [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment.mdx)
+* [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment.mdx)
+
+#### Patching
+
+* [PatchByQueryOperation](../../client-api/operations/patching/set-based.mdx)
+* [PatchOperation](../../client-api/operations/patching/single-document.mdx)
+
+
+#### Counters
+
+* [CounterBatchOperation](../../client-api/operations/counters/counter-batch.mdx)
+* [GetCountersOperation](../../client-api/operations/counters/get-counters.mdx)
+
+
+#### Misc
+
+* [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query.mdx)
+
+### Example - Get Attachment
+
+
+
+{`try (CloseableAttachmentResult fetchedAttachment = store
+ .operations()
+ .send(new GetAttachmentOperation("users/1", "file.txt", AttachmentType.DOCUMENT, null))) \{
+ // do stuff with the attachment stream --> fetchedAttachment.data
+\}
+`}
+
+
+
+
+
+## Maintenance Operations
+
+Maintenance operations include operations for changing the configuration at runtime and for managing indexes.
+
+### How to Send an Operation
+
+
+
+{`public void send(IVoidMaintenanceOperation operation)
+
+public <TResult> TResult send(IMaintenanceOperation<TResult> operation)
+`}
+
+
+
+### The following maintenance operations are available:
+
+#### Client Configuration
+
+* [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration.mdx)
+* [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration.mdx)
+
+#### Indexing
+
+* [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index.mdx)
+* [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index.mdx)
+* [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index.mdx)
+* [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index.mdx)
+* [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock.mdx)
+* [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority.mdx)
+* [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index.mdx)
+* [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing.mdx)
+* [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index.mdx)
+* [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing.mdx)
+* [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors.mdx)
+* [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index.mdx)
+* [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes.mdx)
+* [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms.mdx)
+* [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed.mdx)
+* [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes.mdx)
+
+#### Misc
+
+* [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx)
+* [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx)
+* [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities.mdx)
+
+### Example - Stop Index
+
+
+
+{`store.maintenance().send(new StopIndexOperation("Orders/ByCompany"));
+`}
+
+
+
+
+
+## Server Operations
+
+These operations cover various administrative and miscellaneous server-wide configuration tasks.
+
+### How to Send an Operation
+
+
+
+
+{`public void send(IVoidServerOperation operation)
+
+public <TResult> TResult send(IServerOperation<TResult> operation)
+`}
+
+
+
+
+{`public Operation sendAsync(IServerOperation<OperationIdResult> operation)
+`}
+
+
+
+
+### The following server-wide operations are available:
+
+
+#### Cluster Management
+
+* [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database.mdx)
+* [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database.mdx)
+
+#### Miscellaneous
+
+* [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names.mdx)
+
+### Example - Get Build Number
+
+
+
+{`BuildNumber buildNumber = store.maintenance().server()
+    .send(new GetBuildNumberOperation());
+`}
+
+
+
+
+
+## Remarks
+
+
+By default, operations available via `store.operations()` or `store.maintenance()` work on the default database that was set up for that store. To switch operations to a different database available on that server, use the **[forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx)** method.
+
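+A minimal sketch of switching the target database; the database name "analytics" is an illustrative assumption:
+
+
+
+{`// Fetch statistics from a database other than the store's default
+DatabaseStatistics stats = store.maintenance()
+    .forDatabase("analytics") // illustrative database name
+    .send(new GetStatisticsOperation());
+`}
+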
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/_what-are-operations-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-nodejs.mdx
new file mode 100644
index 0000000000..875d0f0fa4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-nodejs.mdx
@@ -0,0 +1,511 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The RavenDB Client API is built with the notion of layers.
+ At the top, and what you will usually interact with, are the **[DocumentStore](../../client-api/what-is-a-document-store.mdx)**
+ and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx)**.
+ They, in turn, are built on top of the lower-level **Operations** and **Commands** API.
+
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
+
+* In this page:
+ * [Why use operations](../../client-api/operations/what-are-operations.mdx#why-use-operations)
+ * [How operations work](../../client-api/operations/what-are-operations.mdx#how-operations-work)
+ * __Operation types__:
+ * [Common operations](../../client-api/operations/what-are-operations.mdx#common-operations)
+ * [Maintenance operations](../../client-api/operations/what-are-operations.mdx#maintenance-operations)
+ * [Server-maintenance operations](../../client-api/operations/what-are-operations.mdx#server-maintenance-operations)
+ * [Manage lengthy operations](../../client-api/operations/what-are-operations.mdx#manage-lengthy-operations)
+ * [Wait for completion](../../client-api/operations/what-are-operations.mdx#wait-for-completion)
+    * [Kill operation](../../client-api/operations/what-are-operations.mdx#kill-operation)
+
+
+## Why use operations
+
+* Operations provide __management functionality__ that is not available in the context of the session, for example:
+ * Create/delete a database
+ * Execute administrative tasks
+ * Assign permissions
+ * Change server configuration, and more.
+
+* The operations are executed on the DocumentStore and are not part of the session transaction.
+
+* There are some client tasks, such as patching documents, that can be carried out either via the Session ([session.advanced.patch()](../../client-api/operations/patching/single-document.mdx#array-manipulation))
+ or via an Operation on the DocumentStore ([PatchOperation](../../client-api/operations/patching/single-document.mdx#operations-api)).
+
+
+
+## How operations work
+
+* __Sending the request__:
+ Each Operation creates an HTTP request message to be sent to the relevant server endpoint.
+ The DocumentStore `OperationExecutor` sends the request and processes the results.
+* __Target node__:
+ By default, the operation will be executed on the server node that is defined by the [client configuration](../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, server-maintenance operations can be executed on a specific node by using the [forNode](../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) method.
+* __Target database__:
+  By default, operations work on the default database defined in the DocumentStore.
+  However, common operations & maintenance operations can operate on a different database by using the [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) method, as sketched after this list.
+* __Transaction scope__:
+ Operations execute as a single-node transaction.
+  If needed, data will then replicate to the other nodes in the database group.
+* __Background operations__:
+ Some operations may take a long time to complete and can be awaited for completion.
+ Learn more [below](../../client-api/operations/what-are-operations.mdx#wait-for-completion).
+
+
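+A minimal sketch of switching the target database for a common operation; the database name "analytics" is an illustrative assumption:
+
+
+
+{`// Execute a common operation against a database other than the store's default
+const countersResult = await documentStore.operations
+    .forDatabase("analytics") // illustrative database name
+    .send(new GetCountersOperation("products/1-A"));
+`}
+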
+
+## Common operations
+
+
+
+* All common operations implement the `IOperation` interface.
+ The operation is executed within the __database scope__.
+ Use [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) to operate on a specific database other than the default defined in the store.
+
+* These operations include set-based operations such as _PatchOperation_, _CounterBatchOperation_,
+ document-extensions related operations such as getting/putting an attachment, and more.
+ See all available operations [below](../../client-api/operations/what-are-operations#the-following-common-operations-are-available).
+
+* To execute a common operation request,
+ use the `send` method on the `operations` property of the DocumentStore.
+
+__Example__:
+
+
+
+{`// Define operation, e.g. get all counters info for a document
+const getCountersOp = new GetCountersOperation("products/1-A");
+
+// Execute the operation by passing the operation to operations.send
+const allCountersResult = await documentStore.operations.send(getCountersOp);
+
+// Access the operation result
+const numberOfCounters = allCountersResult.counters.length;
+`}
+
+
+
+
+
+
+
+__Send syntax__:
+
+
+
+{`// Available overloads:
+await send(operation);
+await send(operation, sessionInfo);
+await send(operation, sessionInfo, documentType);
+
+await send(patchOperation);
+await send(patchOperation, sessionInfo);
+await send(patchOperation, sessionInfo, resultType);
+`}
+
+
+
+
+
+
+
+#### The following common operations are available:
+
+* __Attachments__:
+ [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment.mdx)
+ [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment.mdx)
+ [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment.mdx)
+
+* __Counters__:
+ [CounterBatchOperation](../../client-api/operations/counters/counter-batch.mdx)
+ [GetCountersOperation](../../client-api/operations/counters/get-counters.mdx)
+
+* __Time series__:
+ TimeSeriesBatchOperation
+ GetMultipleTimeSeriesOperation
+ GetTimeSeriesOperation
+ GetTimeSeriesStatisticsOperation
+
+* __Revisions__:
+ [GetRevisionsOperation](../../document-extensions/revisions/client-api/operations/get-revisions.mdx)
+
+* __Patching__:
+ [PatchOperation](../../client-api/operations/patching/single-document.mdx)
+ [PatchByQueryOperation](../../client-api/operations/patching/set-based.mdx)
+
+* __Delete by query__:
+ [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query.mdx)
+
+* __Compare-exchange__:
+ [PutCompareExchangeValueOperation](../../compare-exchange/create-cmpxchg-items#create-item-using-a-store-operation)
+ [GetCompareExchangeValueOperation](../../compare-exchange/get-cmpxchg-item#get-item-using-a-store-operation)
+ [GetCompareExchangeValuesOperation](../../compare-exchange/get-cmpxchg-items)
+ [DeleteCompareExchangeValueOperation](../../compare-exchange/delete-cmpxchg-items#delete-compare-exchange-item-using-a-store-operation)
+
+
+
+
+## Maintenance operations
+
+
+
+* All maintenance operations implement the `IMaintenanceOperation` interface.
+ The operation is executed within the __database scope__.
+ Use [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) to operate on a specific database other than the default defined in the store.
+
+* These operations include database management operations such as setting client configuration,
+ managing indexes & ongoing-tasks operations, getting stats, and more.
+ See all available maintenance operations [below](../../client-api/operations/what-are-operations#the-following-maintenance-operations-are-available).
+
+* To execute a maintenance operation request,
+ use the `send` method on the `maintenance` property in the DocumentStore.
+
+__Example__:
+
+
+
+{`// Define operation, e.g. stop an index
+const stopIndexOp = new StopIndexOperation("Orders/ByCompany");
+
+// Execute the operation by passing the operation to maintenance.send
+await documentStore.maintenance.send(stopIndexOp);
+
+// This specific operation returns void
+// You can send another operation to verify the index running status
+const indexStatsOp = new GetIndexStatisticsOperation("Orders/ByCompany");
+const indexStats = await documentStore.maintenance.send(indexStatsOp);
+const status = indexStats.status; // will be "Paused"
+`}
+
+
+
+
+
+
+
+__Send syntax__:
+
+
+
+{`await send(operation);
+`}
+
+
+
+
+
+
+
+#### The following maintenance operations are available:
+
+* __Statistics__:
+ [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-database-stats)
+ [GetDetailedStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-stats)
+ [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-collection-stats)
+ [GetDetailedCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-stats)
+
+* __Client Configuration__:
+ [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration.mdx)
+ [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration.mdx)
+
+* __Indexes__:
+ [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes.mdx)
+ [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock.mdx)
+ [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority.mdx)
+ [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors.mdx)
+ [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index.mdx)
+ [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes.mdx)
+ [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms.mdx)
+ GetIndexPerformanceStatisticsOperation
+ GetIndexStatisticsOperation
+ GetIndexesStatisticsOperation
+ GetIndexingStatusOperation
+ GetIndexStalenessOperation
+ [GetIndexNamesOperation](../../client-api/operations/maintenance/indexes/get-index-names.mdx)
+ [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index.mdx)
+ [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing.mdx)
+ [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index.mdx)
+ [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing.mdx)
+ [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index.mdx)
+ [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index.mdx)
+ [DeleteIndexErrorsOperation](../../client-api/operations/maintenance/indexes/delete-index-errors.mdx)
+ [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index.mdx)
+ [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index.mdx)
+ [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed.mdx)
+
+* __Analyzers__:
+ [PutAnalyzersOperation](../../indexes/using-analyzers.mdx#add-custom-analyzer-via-client-api)
+ DeleteAnalyzerOperation
+
+* **Ongoing tasks**:
+ [GetOngoingTaskInfoOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#get-ongoing-task-info)
+ [ToggleOngoingTaskStateOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#toggle-ongoing-task-state)
+ [DeleteOngoingTaskOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#delete-ongoing-task)
+
+* __ETL tasks__:
+ AddEtlOperation
+ UpdateEtlOperation
+ [ResetEtlOperation](../../client-api/operations/maintenance/etl/reset-etl.mdx)
+
+* __Replication tasks__:
+ PutPullReplicationAsHubOperation
+ GetPullReplicationTasksInfoOperation
+ GetReplicationHubAccessOperation
+ GetReplicationPerformanceStatisticsOperation
+ RegisterReplicationHubAccessOperation
+ UnregisterReplicationHubAccessOperation
+ UpdateExternalReplicationOperation
+ UpdatePullReplicationAsSinkOperation
+
+* __Backup__:
+ BackupOperation
+ GetPeriodicBackupStatusOperation
+ StartBackupOperation
+ UpdatePeriodicBackupOperation
+
+* __Connection strings__:
+ [PutConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx)
+ [RemoveConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/remove-connection-string.mdx)
+ [GetConnectionStringsOperation](../../client-api/operations/maintenance/connection-strings/get-connection-string.mdx)
+
+* __Transaction recording__:
+ StartTransactionsRecordingOperation
+ StopTransactionsRecordingOperation
+ ReplayTransactionsRecordingOperation
+
+* __Database settings__:
+ [PutDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#put-database-settings-operation)
+ [GetDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#get-database-settings-operation)
+
+* __Identities__:
+ [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities.mdx)
+ [NextIdentityForOperation](../../client-api/operations/maintenance/identities/increment-next-identity.mdx)
+ [SeedIdentityForOperation](../../client-api/operations/maintenance/identities/seed-identity.mdx)
+
+* __Time series__:
+ ConfigureTimeSeriesOperation
+ ConfigureTimeSeriesPolicyOperation
+ ConfigureTimeSeriesValueNamesOperation
+ RemoveTimeSeriesPolicyOperation
+
+* __Revisions__:
+ [ConfigureRevisionsOperation](../../document-extensions/revisions/client-api/operations/configure-revisions.mdx)
+
+* __Sorters__:
+ [PutSortersOperation](../../client-api/operations/maintenance/sorters/put-sorter.mdx)
+ DeleteSorterOperation
+
+* **Sharding**:
+ [AddPrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#add-prefixes-after-database-creation)
+ [DeletePrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#removing-prefixes)
+ [UpdatePrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#updating-shard-configurations-for-prefixes)
+
+* __Misc__:
+ ConfigureExpirationOperation
+ ConfigureRefreshOperation
+ ConfigureDataArchivalOperation
+ UpdateDocumentsCompressionConfigurationOperation
+ DatabaseHealthCheckOperation
+ GetOperationStateOperation
+ CreateSampleDataOperation
+
+
+
+
+## Server-maintenance operations
+
+
+
+* All server-maintenance operations implement the `IServerOperation` interface.
+ The operation is executed within the __server scope__.
+ Use [forNode](../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) to operate on a specific node other than the default defined in the client configuration.
+
+* These operations include server management and configuration operations.
+ See all available operations [below](../../client-api/operations/what-are-operations#the-following-server-maintenance-operations-are-available).
+
+* To execute a server-maintenance operation request,
+ use the `send` method on the `maintenance.server` property of the DocumentStore.
+
+__Example__:
+
+
+
+{`// Define operation, e.g. get the server build number
+const getBuildNumberOp = new GetBuildNumberOperation();
+
+// Execute the operation by passing the operation to maintenance.server.send
+const buildNumberResult = await documentStore.maintenance.server.send(getBuildNumberOp);
+
+// Access the operation result
+const version = buildNumberResult.buildVersion;
+`}
+
+
+
+
+
+
+
+__Send syntax__:
+
+
+
+{`await send(operation);
+`}
+
+
+
+
+
+
+
+#### The following server-maintenance operations are available:
+
+* __Client certificates__:
+ [PutClientCertificateOperation](../../client-api/operations/server-wide/certificates/put-client-certificate.mdx)
+ [CreateClientCertificateOperation](../../client-api/operations/server-wide/certificates/create-client-certificate.mdx)
+ [GetCertificatesOperation](../../client-api/operations/server-wide/certificates/get-certificates.mdx)
+ [DeleteCertificateOperation](../../client-api/operations/server-wide/certificates/delete-certificate.mdx)
+ EditClientCertificateOperation
+ GetCertificateMetadataOperation
+ ReplaceClusterCertificateOperation
+
+* __Server-wide client configuration__:
+ [PutServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx)
+ [GetServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx)
+
+* __Database management__:
+ [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database.mdx)
+ [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database.mdx)
+ [ToggleDatabasesStateOperation](../../client-api/operations/server-wide/toggle-databases-state.mdx)
+ [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names.mdx)
+ [AddDatabaseNodeOperation](../../client-api/operations/server-wide/add-database-node.mdx)
+ [PromoteDatabaseNodeOperation](../../client-api/operations/server-wide/promote-database-node.mdx)
+ [ReorderDatabaseMembersOperation](../../client-api/operations/server-wide/reorder-database-members.mdx)
+ [CompactDatabaseOperation](../../client-api/operations/server-wide/compact-database.mdx)
+ GetDatabaseRecordOperation
+ SetDatabasesLockOperation
+ CreateDatabaseOperationWithoutNameValidation
+ SetDatabaseDynamicDistributionOperation
+ ModifyDatabaseTopologyOperation
+ UpdateDatabaseOperation
+ UpdateUnusedDatabasesOperation
+
+* __Server-wide ongoing tasks__:
+ DeleteServerWideTaskOperation
+ ToggleServerWideTaskStateOperation
+
+* __Server-wide replication tasks__:
+ PutServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationsOperation
+
+* __Server-wide backup tasks__:
+ PutServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationsOperation
+ RestoreBackupOperation
+
+* __Server-wide analyzers__:
+ [PutServerWideAnalyzersOperation](../../indexes/using-analyzers.mdx#add-custom-analyzer-via-client-api)
+ DeleteServerWideAnalyzerOperation
+
+* __Server-wide sorters__:
+ [PutServerWideSortersOperation](../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx)
+ DeleteServerWideSorterOperation
+
+* __Logs & debug__:
+ SetLogsConfigurationOperation
+ GetLogsConfigurationOperation
+ GetClusterDebugInfoPackageOperation
+ [GetBuildNumberOperation](../../client-api/operations/server-wide/get-build-number.mdx)
+ GetServerWideOperationStateOperation
+
+* __Traffic watch__:
+ PutTrafficWatchConfigurationOperation
+ GetTrafficWatchConfigurationOperation
+
+* __Revisions__:
+ [ConfigureRevisionsForConflictsOperation](../../document-extensions/revisions/client-api/operations/conflict-revisions-configuration.mdx)
+
+* __Misc__:
+ ModifyConflictSolverOperation
+ OfflineMigrationOperation
+
+
+
+
+## Manage lengthy operations
+
+* Some operations that run in the server background may take a long time to complete.
+
+* For operations that implement an interface of type `OperationIdResult`,
+  executing the operation via the `send` method returns a promise that resolves to an `OperationCompletionAwaiter` object,
+  which can then be __awaited for completion__ or __aborted (killed)__.
+
+
+ __Wait for completion__:
+
+
+
+{`// Define operation, e.g. delete all discontinued products
+// Note: This operation implements interface: 'IOperation'
+const deleteByQueryOp = new DeleteByQueryOperation("from Products where Discontinued = true");
+
+// Execute the operation
+// 'send' returns an object that can be awaited on
+const asyncOperation = await documentStore.operations.send(deleteByQueryOp);
+
+// Call method 'waitForCompletion' to wait for the operation to complete
+await asyncOperation.waitForCompletion();
+`}
+
+
+
+
+
+
+
+ __Kill operation__:
+
+
+
+{`// Define operation, e.g. delete all discontinued products
+// Note: This operation implements interface: 'IOperation'
+const deleteByQueryOp = new DeleteByQueryOperation("from Products where Discontinued = true");
+
+// Execute the operation
+// 'send' returns an object that can be 'killed'
+const asyncOperation = await documentStore.operations.send(deleteByQueryOp);
+
+// Call method 'kill' to abort operation
+await asyncOperation.kill();
+`}
+
+
+
+
+
+
+
+##### Syntax:
+
+
+
+{`await waitForCompletion();
+await kill();
+`}
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/_what-are-operations-php.mdx b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-php.mdx
new file mode 100644
index 0000000000..71f6abebc2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-php.mdx
@@ -0,0 +1,493 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The RavenDB Client API is built with the notion of layers.
+ At the top, and what you will usually interact with, are the **[documentStore](../../client-api/what-is-a-document-store.mdx)**
+ and the **[session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx)**.
+ They, in turn, are built on top of the lower-level **Operations** and **Commands** API.
+
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
+
+* In this page:
+ * [Why use operations](../../client-api/operations/what-are-operations.mdx#why-use-operations)
+ * [How operations work](../../client-api/operations/what-are-operations.mdx#how-operations-work)
+ * **Operation types**:
+ * [Common operations](../../client-api/operations/what-are-operations.mdx#common-operations)
+ * [Maintenance operations](../../client-api/operations/what-are-operations.mdx#maintenance-operations)
+ * [Server-maintenance operations](../../client-api/operations/what-are-operations.mdx#server-maintenance-operations)
+ * [Manage lengthy operations](../../client-api/operations/what-are-operations.mdx#manage-lengthy-operations)
+ * [Wait for completion](../../client-api/operations/what-are-operations.mdx#wait-for-completion)
+
+
+## Why use operations
+
+* Operations provide **management functionality** that is not available in the context of the session, for example:
+ * Create/delete a database
+ * Execute administrative tasks
+ * Assign permissions
+ * Change server configuration, and more.
+
+* The operations are executed on the DocumentStore and are not part of the session transaction.
+
+* There are some client tasks, such as patching documents, that can be carried out either via the Session
+ ([session.advanced.patch()](../../client-api/operations/patching/single-document.mdx#array-manipulation))
+ or via an Operation on the DocumentStore ([PatchOperation](../../client-api/operations/patching/single-document.mdx#operations-api)).
+
+
+
+## How operations work
+
+* **Sending the request**:
+ Each Operation is an encapsulation of a `RavenCommand`.
+ The RavenCommand creates the HTTP request message to be sent to the relevant server endpoint.
+ The DocumentStore `OperationExecutor` sends the request and processes the results.
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the [client configuration](../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, server-maintenance operations can be executed on a specific node by using the [forNode](../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) method.
+* **Target database**:
+  By default, operations work on the default database defined in the DocumentStore.
+  However, common operations & maintenance operations can operate on a different database by using the [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) method, as sketched after this list.
+* **Transaction scope**:
+ Operations execute as a single-node transaction.
+  If needed, data will then replicate to the other nodes in the database group.
+* **Background operations**:
+ Some operations may take a long time to complete and can be awaited for completion.
+ Learn more [below](../../client-api/operations/what-are-operations.mdx#wait-for-completion).
+
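+A minimal sketch of switching the target database for a common operation; the database name "analytics" is an illustrative assumption:
+
+
+
+{`// Execute a common operation against a database other than the store's default
+$countersResult = $documentStore->operations()
+    ->forDatabase("analytics") // illustrative database name
+    ->send(new GetCountersOperation("products/1-A"));
+`}
+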
+
+
+## Common operations
+
+* All common operations implement the `IOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) to operate on a specific database other than the default defined in the store.
+
+* These operations include set-based operations such as _PatchOperation_, _CounterBatchOperation_,
+ document-extensions related operations such as getting/putting an attachment, and more.
+ See all available operations [below](../../client-api/operations/what-are-operations#the-following-common-operations-are-available).
+
+* To execute a common operation request,
+ use the `send` method on the `operations` property in the DocumentStore.
+
+#### Example:
+
+
+
+{`// Define operation, e.g. get all counters info for a document
+$getCountersOp = new GetCountersOperation("products/1-A");
+
+// Execute the operation by passing it to operations()->send()
+/** @var CountersDetail $allCountersResult */
+$allCountersResult = $documentStore->operations()->send($getCountersOp);
+
+// Access the operation result
+$numberOfCounters = count($allCountersResult->getCounters());
+`}
+
+
+
+##### Syntax:
+
+
+
+{`/**
+ * Usage and available overloads:
+ *
+ * - send(?OperationInterface $operation, ?SessionInfo $sessionInfo = null): ResultInterface;
+ * - send(string $entityClass, ?PatchOperation $operation, ?SessionInfo $sessionInfo = null): PatchOperationResult;
+ * - send(?PatchOperation $operation, ?SessionInfo $sessionInfo = null): PatchStatus;
+ *
+ * @param mixed ...$parameters
+ */
+public function send(...$parameters);
+`}
+
+
+
+
+
+#### The following common operations are available:
+
+* **Attachments**:
+ [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment.mdx)
+ [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment.mdx)
+ [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment.mdx)
+
+* **Counters**:
+ [CounterBatchOperation](../../client-api/operations/counters/counter-batch.mdx)
+ [GetCountersOperation](../../client-api/operations/counters/get-counters.mdx)
+
+* **Time series**:
+ [TimeSeriesBatchOperation](../../document-extensions/timeseries/client-api/operations/append-and-delete.mdx)
+ [GetMultipleTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get.mdx)
+ [GetTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get.mdx)
+ GetTimeSeriesStatisticsOperation
+
+* **Revisions**:
+ [GetRevisionsOperation](../../document-extensions/revisions/client-api/operations/get-revisions.mdx)
+
+* **Patching**:
+ [PatchOperation](../../client-api/operations/patching/single-document.mdx)
+ [PatchByQueryOperation](../../client-api/operations/patching/set-based.mdx)
+
+* **Delete by query**:
+ [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query.mdx)
+
+* **Compare-exchange**:
+ PutCompareExchangeValueOperation
+ GetCompareExchangeValueOperation
+ [GetCompareExchangeValuesOperation](../../compare-exchange/get-cmpxchg-items)
+ DeleteCompareExchangeValueOperation
+
+
+
+
+## Maintenance operations
+
+* All maintenance operations implement the `IMaintenanceOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) to operate on a specific database other than the default defined in the store.
+
+* These operations include database management operations such as setting client configuration,
+ managing indexes & ongoing-tasks operations, getting stats, and more.
+ See all available maintenance operations [below](../../client-api/operations/what-are-operations#the-following-maintenance-operations-are-available).
+
+* To execute a maintenance operation request,
+ use the `send` method on the `maintenance` property in the DocumentStore.
+
+#### Example:
+
+
+
+{`// Define operation, e.g. stop an index
+$stopIndexOp = new StopIndexOperation("Orders/ByCompany");
+
+// Execute the operation by passing it to maintenance()->send()
+$documentStore->maintenance()->send($stopIndexOp);
+
+// This specific operation returns void
+// You can send another operation to verify the index running status
+$indexStatsOp = new GetIndexStatisticsOperation("Orders/ByCompany");
+/** @var IndexStats $indexStats */
+$indexStats = $documentStore->maintenance()->send($indexStatsOp);
+
+/** @var IndexRunningStatus $status */
+$status = $indexStats->getStatus(); // will be "Paused"
+`}
+
+
+
+##### Syntax:
+
+
+
+{`public function send(MaintenanceOperationInterface $operation): ResultInterface;
+`}
+
+
+
+
+
+#### The following maintenance operations are available:
+
+* **Statistics**:
+ [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-database-stats)
+ [GetDetailedStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-stats)
+ [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-collection-stats)
+ [GetDetailedCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-stats)
+
+* **Client Configuration**:
+ [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration.mdx)
+ [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration.mdx)
+
+* **Indexes**:
+ [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes.mdx)
+ [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock.mdx)
+ [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority.mdx)
+ [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors.mdx)
+ [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index.mdx)
+ [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes.mdx)
+ [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms.mdx)
+ GetIndexPerformanceStatisticsOperation
+ GetIndexStatisticsOperation
+ GetIndexesStatisticsOperation
+ GetIndexingStatusOperation
+ GetIndexStalenessOperation
+ [GetIndexNamesOperation](../../client-api/operations/maintenance/indexes/get-index-names.mdx)
+ [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index.mdx)
+ [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing.mdx)
+ [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index.mdx)
+ [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing.mdx)
+ [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index.mdx)
+ [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index.mdx)
+ [DeleteIndexErrorsOperation](../../client-api/operations/maintenance/indexes/delete-index-errors.mdx)
+ [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index.mdx)
+ [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index.mdx)
+ [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed.mdx)
+
+* **Analyzers**:
+ PutAnalyzersOperation
+ DeleteAnalyzerOperation
+
+* **Ongoing tasks**:
+ [GetOngoingTaskInfoOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#get-ongoing-task-info)
+ [ToggleOngoingTaskStateOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#toggle-ongoing-task-state)
+ [DeleteOngoingTaskOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#delete-ongoing-task)
+
+* **ETL tasks**:
+ AddEtlOperation
+ UpdateEtlOperation
+ [ResetEtlOperation](../../client-api/operations/maintenance/etl/reset-etl.mdx)
+
+* **Replication tasks**:
+ PutPullReplicationAsHubOperation
+ GetPullReplicationTasksInfoOperation
+ GetReplicationHubAccessOperation
+ GetReplicationPerformanceStatisticsOperation
+ RegisterReplicationHubAccessOperation
+ UnregisterReplicationHubAccessOperation
+ UpdateExternalReplicationOperation
+ UpdatePullReplicationAsSinkOperation
+
+* **Backup**:
+ BackupOperation
+ GetPeriodicBackupStatusOperation
+ StartBackupOperation
+ UpdatePeriodicBackupOperation
+
+* **Connection strings**:
+ [PutConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx)
+ [RemoveConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/remove-connection-string.mdx)
+ [GetConnectionStringsOperation](../../client-api/operations/maintenance/connection-strings/get-connection-string.mdx)
+
+* **Transaction recording**:
+ StartTransactionsRecordingOperation
+ StopTransactionsRecordingOperation
+ ReplayTransactionsRecordingOperation
+
+* **Database settings**:
+ [PutDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#put-database-settings-operation)
+ [GetDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#get-database-settings-operation)
+
+* **Identities**:
+ [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities.mdx)
+ [NextIdentityForOperation](../../client-api/operations/maintenance/identities/increment-next-identity.mdx)
+ [SeedIdentityForOperation](../../client-api/operations/maintenance/identities/seed-identity.mdx)
+
+* **Time series**:
+ ConfigureTimeSeriesOperation
+ ConfigureTimeSeriesPolicyOperation
+ ConfigureTimeSeriesValueNamesOperation
+ RemoveTimeSeriesPolicyOperation
+
+* **Revisions**:
+ [ConfigureRevisionsOperation](../../document-extensions/revisions/client-api/operations/configure-revisions.mdx)
+
+* **Sorters**:
+ [PutSortersOperation](../../client-api/operations/maintenance/sorters/put-sorter.mdx)
+ DeleteSorterOperation
+
+* **Sharding**:
+ [AddPrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#add-prefixes-after-database-creation)
+ [DeletePrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#removing-prefixes)
+ [UpdatePrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#updating-shard-configurations-for-prefixes)
+
+* **Misc**:
+ ConfigureExpirationOperation
+ ConfigureRefreshOperation
+ ConfigureDataArchivalOperation
+ UpdateDocumentsCompressionConfigurationOperation
+ DatabaseHealthCheckOperation
+ GetOperationStateOperation
+ CreateSampleDataOperation
+
+
+
+
+## Server-maintenance operations
+
+* All server-maintenance operations implement the `IServerOperation` interface.
+ The operation is executed within the **server scope**.
+ Use [forNode](../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) to operate on a specific node other than the default defined in the client configuration.
+
+* These operations include server management and configuration operations.
+ See all available operations [below](../../client-api/operations/what-are-operations#the-following-server-maintenance-operations-are-available).
+
+* To execute a server-maintenance operation request,
+ use the `send` method on the `maintenance.server` property in the DocumentStore.
+
+#### Example:
+
+
+
+{`// Define operation, e.g. get the server build number
+$getBuildNumberOp = new GetBuildNumberOperation();
+
+// Execute the operation by passing it to maintenance()->server()->send()
+/** @var BuildNumber $buildNumberResult */
+$buildNumberResult = $documentStore->maintenance()->server()->send($getBuildNumberOp);
+
+// Access the operation result
+$version = $buildNumberResult->getBuildVersion();
+`}
+
+
+
+##### Syntax:
+
+
+
+{`public function send(ServerOperationInterface $operation): ?object;
+`}
+
+
+
+
+
+#### The following server-maintenance operations are available:
+
+* **Client certificates**:
+ [PutClientCertificateOperation](../../client-api/operations/server-wide/certificates/put-client-certificate.mdx)
+ [CreateClientCertificateOperation](../../client-api/operations/server-wide/certificates/create-client-certificate.mdx)
+ [GetCertificatesOperation](../../client-api/operations/server-wide/certificates/get-certificates.mdx)
+ [DeleteCertificateOperation](../../client-api/operations/server-wide/certificates/delete-certificate.mdx)
+ EditClientCertificateOperation
+ GetCertificateMetadataOperation
+ ReplaceClusterCertificateOperation
+
+* **Server-wide client configuration**:
+ [PutServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx)
+ [GetServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx)
+
+* **Database management**:
+ [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database.mdx)
+ [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database.mdx)
+ [ToggleDatabasesStateOperation](../../client-api/operations/server-wide/toggle-databases-state.mdx)
+ [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names.mdx)
+ [AddDatabaseNodeOperation](../../client-api/operations/server-wide/add-database-node.mdx)
+ [PromoteDatabaseNodeOperation](../../client-api/operations/server-wide/promote-database-node.mdx)
+ [ReorderDatabaseMembersOperation](../../client-api/operations/server-wide/reorder-database-members.mdx)
+ [CompactDatabaseOperation](../../client-api/operations/server-wide/compact-database.mdx)
+ GetDatabaseRecordOperation
+ SetDatabasesLockOperation
+ CreateDatabaseOperationWithoutNameValidation
+ SetDatabaseDynamicDistributionOperation
+ ModifyDatabaseTopologyOperation
+ UpdateDatabaseOperation
+ UpdateUnusedDatabasesOperation
+
+* **Server-wide ongoing tasks**:
+ DeleteServerWideTaskOperation
+ ToggleServerWideTaskStateOperation
+
+* **Server-wide replication tasks**:
+ PutServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationsOperation
+
+* **Server-wide backup tasks**:
+ PutServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationsOperation
+ RestoreBackupOperation
+
+* **Server-wide analyzers**:
+ PutServerWideAnalyzersOperation
+ DeleteServerWideAnalyzerOperation
+
+* **Server-wide sorters**:
+ [PutServerWideSortersOperation](../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx)
+ DeleteServerWideSorterOperation
+
+* **Logs & debug**:
+ SetLogsConfigurationOperation
+ GetLogsConfigurationOperation
+ GetClusterDebugInfoPackageOperation
+ [GetBuildNumberOperation](../../client-api/operations/server-wide/get-build-number.mdx)
+ GetServerWideOperationStateOperation
+
+* **Traffic watch**:
+ PutTrafficWatchConfigurationOperation
+ GetTrafficWatchConfigurationOperation
+
+* **Revisions**:
+ [ConfigureRevisionsForConflictsOperation](../../document-extensions/revisions/client-api/operations/conflict-revisions-configuration.mdx)
+
+* **Misc**:
+ ModifyConflictSolverOperation
+ OfflineMigrationOperation
+
+
+
+
+## Manage lengthy operations
+
+* Some operations that run in the server background may take a long time to complete.
+
+* For operations that implement an interface of type `OperationIdResult`,
+  executing the operation via the `sendAsync` method returns an `Operation` object,
+  which can be **awaited for completion**.
+
+#### Wait for completion:
+
+
+
+{`public function WaitForCompletionWithTimeout(DocumentStore $documentStore, Duration $duration)
+\{
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'OperationInterface'
+ $deleteByQueryOp = new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+    // sendAsync returns an 'Operation' object that can be awaited on
+
+ /** @var Operation $operation */
+ $operation = $documentStore->operations()->sendAsync($deleteByQueryOp);
+
+ try \{
+ // Call method 'waitForCompletion()' to wait for the operation to complete.
+
+ /** @var BulkOperationResult $result */
+ $result = $operation->waitForCompletion($duration);
+
+ // The operation has finished within the specified timeframe
+ $numberOfItemsDeleted = $result->getTotal(); // Access the operation result
+
+    \} catch (TimeoutException $exception) \{
+        // The operation did not finish within the specified timeframe
+ \}
+
+\}
+`}
+
+
+
+##### Syntax:
+
+
+
+{`/**
+ * Wait for operation completion.
+ *
+ * Throws a TimeoutException if $duration is set and the operation's execution time exceeds that interval.
+ *
+ * Usage:
+ * - waitForCompletion(): void;             // waits until the operation is finished
+ * - waitForCompletion(Duration $duration); // waits for the given duration
+ * - waitForCompletion(int $seconds);       // waits for the given number of seconds
+ *
+ * @param Duration|int|null $duration
+ */
+public function waitForCompletion(Duration|int|null $duration = null): void;
+`}
+
+
+
+| Parameter | Type | Description |
+|---------------|---------------------|-------------|
+| **$duration** | `Duration` or `int` | When a duration is specified - the server throws a `TimeoutException` if the operation has not completed within the specified time frame. The operation itself continues to run in the background; no rollback action takes place.<br/>`null` - `waitForCompletion` waits indefinitely for the operation to complete. |
diff --git a/versioned_docs/version-7.1/client-api/operations/_what-are-operations-python.mdx b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-python.mdx
new file mode 100644
index 0000000000..f01ea43d95
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/_what-are-operations-python.mdx
@@ -0,0 +1,442 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The RavenDB Client API is built with the notion of layers.
+ At the top, and what you will usually interact with, are the **[documentStore](../../client-api/what-is-a-document-store.mdx)**
+ and the **[session](../../client-api/session/what-is-a-session-and-how-does-it-work.mdx)**.
+ They, in turn, are built on top of the lower-level **Operations** and **Commands** API.
+
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
+
+* In this page:
+ * [Why use operations](../../client-api/operations/what-are-operations.mdx#why-use-operations)
+ * [How operations work](../../client-api/operations/what-are-operations.mdx#how-operations-work)
+ * **Operation types**:
+ * [Common operations](../../client-api/operations/what-are-operations.mdx#common-operations)
+ * [Maintenance operations](../../client-api/operations/what-are-operations.mdx#maintenance-operations)
+ * [Server-maintenance operations](../../client-api/operations/what-are-operations.mdx#server-maintenance-operations)
+ * [Manage lengthy operations](../../client-api/operations/what-are-operations.mdx#manage-lengthy-operations)
+ * [Wait for completion](../../client-api/operations/what-are-operations.mdx#wait-for-completion)
+ * [Kill operation](../../client-api/operations/what-are-operations.mdx#kill-operation)
+
+
+## Why use operations
+
+* Operations provide **management functionality** that is not available in the context of the session, for example:
+ * Create/delete a database
+ * Execute administrative tasks
+ * Assign permissions
+ * Change server configuration, and more.
+
+* The operations are executed on the DocumentStore and are not part of the session transaction.
+
+* There are some client tasks, such as patching documents, that can be carried out either via the Session
+ ([session.advanced.patch()](../../client-api/operations/patching/single-document.mdx#array-manipulation))
+ or via an Operation on the DocumentStore ([PatchOperation](../../client-api/operations/patching/single-document.mdx#operations-api)).
+
+
+
+## How operations work
+
+* **Sending the request**:
+ Each Operation is an encapsulation of a `RavenCommand`.
+ The RavenCommand creates the HTTP request message to be sent to the relevant server endpoint.
+ The DocumentStore `OperationExecutor` sends the request and processes the results.
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the [client configuration](../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, server-maintenance operations can be executed on a specific node by using the [for_node](../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) method.
+* **Target database**:
+ By default, operations work on the default database defined in the DocumentStore.
+ However, common operations & maintenance operations can operate on a different database by using the [for_database](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) method.
+* **Transaction scope**:
+ Operations execute as a single-node transaction.
+ If needed, data will then replicate to the other nodes in the database-group.
+* **Background operations**:
+ Some operations may take a long time to complete and can be awaited for completion.
+ Learn more [below](../../client-api/operations/what-are-operations.mdx#wait-for-completion).
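+
+The following minimal sketch (not part of the official examples) ties these points together,
+showing how a common operation can be redirected to another database with `for_database`.
+The import paths below are an assumption - adjust them to your client version.
+
+
+
+{`# A minimal sketch, assuming the import locations below; adjust to your client version.
+from ravendb import DocumentStore
+from ravendb.documents.operations.counters import GetCountersOperation
+
+store = DocumentStore(urls=["http://localhost:8080"], database="DefaultDB")
+store.initialize()
+
+# Sent to the store's default database, on a node chosen by the client configuration
+default_db_result = store.operations.send(GetCountersOperation("products/1-A"))
+
+# Use for_database to target a different database on the same store
+other_db_result = store.operations.for_database("AnotherDB").send(
+    GetCountersOperation("products/1-A"))
+`}
+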
+
+
+
+## Common operations
+
+* All common operations implement the `IOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [for_database](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) to operate on a specific database other than the default defined in the store.
+
+* These operations include set-based operations such as _PatchOperation_, _CounterBatchOperation_,
+ document-extensions related operations such as getting/putting an attachment, and more.
+ See all available operations [below](../../client-api/operations/what-are-operations#the-following-common-operations-are-available).
+
+* To execute a common operation request,
+ use the `send` method on the `operations` property of the DocumentStore.
+
+#### Example:
+
+
+
+{`# Define operation, e.g. get all counters info for a document
+get_counters_op = GetCountersOperation("products/1-A")
+
+# Execute the operation by passing the operation to operations.send
+all_counters_result = store.operations.send(get_counters_op)
+
+# Access the operation result
+number_of_counters = len(all_counters_result.counters)
+`}
+
+
+
+##### Syntax:
+
+
+
+{`# Available overloads:
+def send(self, operation: IOperation[_Operation_T], session_info: SessionInfo = None) -> _Operation_T: ...
+
+def send_async(self, operation: IOperation[OperationIdResult]) -> Operation: ...
+
+def send_patch_operation(self, operation: PatchOperation, session_info: SessionInfo) -> PatchStatus: ...
+
+def send_patch_operation_with_entity_class(
+ self, entity_class: _T, operation: PatchOperation, session_info: Optional[SessionInfo] = None
+) -> PatchOperation.Result[_T]: ...
+`}
+
+
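+
+For instance, the `send_patch_operation` overload above could be used as in the following hedged sketch.
+The `PatchRequest` construction and its import path are assumptions based on the Python client's patching API,
+not taken from this page - verify them against your client version.
+
+
+
+{`# A hedged sketch of the send_patch_operation overload shown above.
+# Import path and PatchRequest usage are assumptions - verify against your client version.
+from ravendb.documents.operations.patch import PatchOperation, PatchRequest
+
+# 'store' is an initialized DocumentStore, as in the example above
+patch_request = PatchRequest("this.Discontinued = true")
+
+# Passing None as the change vector skips the concurrency check
+patch_op = PatchOperation("products/1-A", None, patch_request)
+
+status = store.operations.send_patch_operation(patch_op, None)
+`}
+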
+
+
+
+#### The following common operations are available:
+
+* **Attachments**:
+ [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment.mdx)
+ [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment.mdx)
+ [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment.mdx)
+
+* **Counters**:
+ [CounterBatchOperation](../../client-api/operations/counters/counter-batch.mdx)
+ [GetCountersOperation](../../client-api/operations/counters/get-counters.mdx)
+
+* **Time series**:
+ [TimeSeriesBatchOperation](../../document-extensions/timeseries/client-api/operations/append-and-delete.mdx)
+ [GetMultipleTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get.mdx)
+ [GetTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get.mdx)
+ GetTimeSeriesStatisticsOperation
+
+* **Revisions**:
+ [GetRevisionsOperation](../../document-extensions/revisions/client-api/operations/get-revisions.mdx)
+
+* **Patching**:
+ [PatchOperation](../../client-api/operations/patching/single-document.mdx)
+ [PatchByQueryOperation](../../client-api/operations/patching/set-based.mdx)
+
+* **Delete by query**:
+ [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query.mdx)
+
+* **Compare-exchange**:
+ PutCompareExchangeValueOperation
+ GetCompareExchangeValueOperation
+ [GetCompareExchangeValuesOperation](../../compare-exchange/get-cmpxchg-items)
+  DeleteCompareExchangeValueOperation
+
+
+
+
+## Maintenance operations
+
+* All maintenance operations implement the `IMaintenanceOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [for_database](../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx) to operate on a specific database other than the default defined in the store.
+
+* These operations include database management operations such as setting client configuration,
+ managing indexes & ongoing-tasks operations, getting stats, and more.
+ See all available maintenance operations [below](../../client-api/operations/what-are-operations#the-following-maintenance-operations-are-available).
+
+* To execute a maintenance operation request,
+ use the `send` method on the `maintenance` property in the DocumentStore.
+
+#### Example:
+
+
+
+{`# Define operation, e.g. stop an index
+stop_index_op = StopIndexOperation("Orders/ByCompany")
+
+# Execute the operation by passing the operation to maintenance.send
+store.maintenance.send(stop_index_op)
+
+# This specific operation returns None
+# You can send another operation to verify the index running status
+index_stats_op = GetIndexStatisticsOperation("Orders/ByCompany")
+index_stats = store.maintenance.send(index_stats_op)
+status = index_stats.status # will be "Paused"
+`}
+
+
+
+##### Syntax:
+
+
+
+{`def send(
+ self, operation: Union[VoidMaintenanceOperation, MaintenanceOperation[_Operation_T]]
+) -> Optional[_Operation_T]: ...
+
+def send_async(self, operation: MaintenanceOperation[OperationIdResult]) -> Operation: ...
+`}
+
+
+
+
+
+#### The following maintenance operations are available:
+
+* **Statistics**:
+ [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-database-stats)
+ [GetDetailedStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-stats)
+ [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-collection-stats)
+ [GetDetailedCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-stats)
+
+* **Client Configuration**:
+ [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration.mdx)
+ [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration.mdx)
+
+* **Indexes**:
+ [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes.mdx)
+ [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock.mdx)
+ [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority.mdx)
+ [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors.mdx)
+ [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index.mdx)
+ [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes.mdx)
+ [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms.mdx)
+ GetIndexPerformanceStatisticsOperation
+ GetIndexStatisticsOperation
+ GetIndexesStatisticsOperation
+ GetIndexingStatusOperation
+ GetIndexStalenessOperation
+ [GetIndexNamesOperation](../../client-api/operations/maintenance/indexes/get-index-names.mdx)
+ [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index.mdx)
+ [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing.mdx)
+ [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index.mdx)
+ [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing.mdx)
+ [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index.mdx)
+ [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index.mdx)
+ [DeleteIndexErrorsOperation](../../client-api/operations/maintenance/indexes/delete-index-errors.mdx)
+ [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index.mdx)
+ [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index.mdx)
+ [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed.mdx)
+
+* **Analyzers**:
+ PutAnalyzersOperation
+ DeleteAnalyzerOperation
+
+* **Ongoing tasks**:
+ [GetOngoingTaskInfoOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#get-ongoing-task-info)
+ [ToggleOngoingTaskStateOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#toggle-ongoing-task-state)
+ [DeleteOngoingTaskOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#delete-ongoing-task)
+
+* **ETL tasks**:
+ AddEtlOperation
+ UpdateEtlOperation
+ [ResetEtlOperation](../../client-api/operations/maintenance/etl/reset-etl.mdx)
+
+* **Replication tasks**:
+ PutPullReplicationAsHubOperation
+ GetPullReplicationTasksInfoOperation
+ GetReplicationHubAccessOperation
+ GetReplicationPerformanceStatisticsOperation
+ RegisterReplicationHubAccessOperation
+ UnregisterReplicationHubAccessOperation
+ UpdateExternalReplicationOperation
+ UpdatePullReplicationAsSinkOperation
+
+* **Backup**:
+ BackupOperation
+ GetPeriodicBackupStatusOperation
+ StartBackupOperation
+ UpdatePeriodicBackupOperation
+
+* **Connection strings**:
+ [PutConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx)
+ [RemoveConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/remove-connection-string.mdx)
+ [GetConnectionStringsOperation](../../client-api/operations/maintenance/connection-strings/get-connection-string.mdx)
+
+* **Transaction recording**:
+ StartTransactionsRecordingOperation
+ StopTransactionsRecordingOperation
+ ReplayTransactionsRecordingOperation
+
+* **Database settings**:
+ [PutDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#put-database-settings-operation)
+ [GetDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#get-database-settings-operation)
+
+* **Identities**:
+ [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities.mdx)
+ [NextIdentityForOperation](../../client-api/operations/maintenance/identities/increment-next-identity.mdx)
+ [SeedIdentityForOperation](../../client-api/operations/maintenance/identities/seed-identity.mdx)
+
+* **Time series**:
+ ConfigureTimeSeriesOperation
+ ConfigureTimeSeriesPolicyOperation
+ ConfigureTimeSeriesValueNamesOperation
+ RemoveTimeSeriesPolicyOperation
+
+* **Revisions**:
+ [ConfigureRevisionsOperation](../../document-extensions/revisions/client-api/operations/configure-revisions.mdx)
+
+* **Sorters**:
+ [PutSortersOperation](../../client-api/operations/maintenance/sorters/put-sorter.mdx)
+ DeleteSorterOperation
+
+* **Sharding**:
+ [AddPrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#add-prefixes-after-database-creation)
+ [DeletePrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#removing-prefixes)
+ [UpdatePrefixedShardingSettingOperation](../../sharding/administration/sharding-by-prefix.mdx#updating-shard-configurations-for-prefixes)
+
+* **Misc**:
+ ConfigureExpirationOperation
+ ConfigureRefreshOperation
+ ConfigureDataArchivalOperation
+ UpdateDocumentsCompressionConfigurationOperation
+ DatabaseHealthCheckOperation
+ GetOperationStateOperation
+ CreateSampleDataOperation
+
+
+
+
+## Server-maintenance operations
+
+* All server-maintenance operations implement the `IServerOperation` interface.
+ The operation is executed within the **server scope**.
+ Use [for_node](../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) to operate on a specific node other than the default defined in the client configuration.
+
+* These operations include server management and configuration operations.
+ See all available operations [below](../../client-api/operations/what-are-operations#the-following-server-maintenance-operations-are-available).
+
+* To execute a server-maintenance operation request,
+ use the `send` method on the `maintenance.server` property in the DocumentStore.
+
+#### Example:
+
+
+
+{`# Define operation, e.g. get the server build number
+get_build_number_op = GetBuildNumberOperation()
+
+# Execute the operation by passing to maintenance.server.send
+build_number_result = store.maintenance.server.send(get_build_number_op)
+
+# Access the operation result
+version = build_number_result.build_version
+`}
+
+
+
+##### Syntax:
+
+
+
+{`def send(self, operation: ServerOperation[_T_OperationResult]) -> Optional[_T_OperationResult]: ...
+
+def send_async(self, operation: ServerOperation[OperationIdResult]) -> Operation: ...
+`}
+
+
+
+
+
+#### The following server-maintenance operations are available:
+
+* **Client certificates**:
+ [PutClientCertificateOperation](../../client-api/operations/server-wide/certificates/put-client-certificate.mdx)
+ [CreateClientCertificateOperation](../../client-api/operations/server-wide/certificates/create-client-certificate.mdx)
+ [GetCertificatesOperation](../../client-api/operations/server-wide/certificates/get-certificates.mdx)
+ [DeleteCertificateOperation](../../client-api/operations/server-wide/certificates/delete-certificate.mdx)
+ EditClientCertificateOperation
+ GetCertificateMetadataOperation
+ ReplaceClusterCertificateOperation
+
+* **Server-wide client configuration**:
+ [PutServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx)
+ [GetServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx)
+
+* **Database management**:
+ [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database.mdx)
+ [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database.mdx)
+ [ToggleDatabasesStateOperation](../../client-api/operations/server-wide/toggle-databases-state.mdx)
+ [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names.mdx)
+ [AddDatabaseNodeOperation](../../client-api/operations/server-wide/add-database-node.mdx)
+ [PromoteDatabaseNodeOperation](../../client-api/operations/server-wide/promote-database-node.mdx)
+ [ReorderDatabaseMembersOperation](../../client-api/operations/server-wide/reorder-database-members.mdx)
+ [CompactDatabaseOperation](../../client-api/operations/server-wide/compact-database.mdx)
+ GetDatabaseRecordOperation
+ SetDatabasesLockOperation
+ CreateDatabaseOperationWithoutNameValidation
+ SetDatabaseDynamicDistributionOperation
+ ModifyDatabaseTopologyOperation
+ UpdateDatabaseOperation
+ UpdateUnusedDatabasesOperation
+
+* **Server-wide ongoing tasks**:
+ DeleteServerWideTaskOperation
+ ToggleServerWideTaskStateOperation
+
+* **Server-wide replication tasks**:
+ PutServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationsOperation
+
+* **Server-wide backup tasks**:
+ PutServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationsOperation
+ RestoreBackupOperation
+
+* **Server-wide analyzers**:
+ PutServerWideAnalyzersOperation
+ DeleteServerWideAnalyzerOperation
+
+* **Server-wide sorters**:
+ [PutServerWideSortersOperation](../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx)
+ DeleteServerWideSorterOperation
+
+* **Logs & debug**:
+ SetLogsConfigurationOperation
+ GetLogsConfigurationOperation
+ GetClusterDebugInfoPackageOperation
+ [GetBuildNumberOperation](../../client-api/operations/server-wide/get-build-number.mdx)
+ GetServerWideOperationStateOperation
+
+* **Traffic watch**:
+ PutTrafficWatchConfigurationOperation
+ GetTrafficWatchConfigurationOperation
+
+* **Revisions**:
+ [ConfigureRevisionsForConflictsOperation](../../document-extensions/revisions/client-api/operations/conflict-revisions-configuration.mdx)
+
+* **Misc**:
+ ModifyConflictSolverOperation
+ OfflineMigrationOperation
+
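+
+
+
+## Manage lengthy operations
+
+Some operations run in the background on the server and may take a long time to complete.
+Per the syntax sections above, sending such an operation via `send_async` returns an `Operation` object,
+which can be awaited for completion. The sketch below is an assumption modeled on the other clients'
+`waitForCompletion`; verify the exact method and import names against your client version.
+
+#### Wait for completion:
+
+
+
+{`# A hedged sketch - the import path and wait_for_completion method name are assumptions,
+# mirroring the waitForCompletion method of the other language clients.
+from ravendb import DeleteByQueryOperation
+
+delete_by_query_op = DeleteByQueryOperation("from 'Orders' where Freight > 30")
+
+# send_async returns an Operation object (see the syntax sections above)
+operation = store.operations.send_async(delete_by_query_op)
+
+# Wait for the background operation to finish
+operation.wait_for_completion()
+`}
+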
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_category_.json b/versioned_docs/version-7.1/client-api/operations/attachments/_category_.json
new file mode 100644
index 0000000000..b2b7ed7266
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 5,
+  "label": "Attachments"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-csharp.mdx
new file mode 100644
index 0000000000..b22232e875
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-csharp.mdx
@@ -0,0 +1,32 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to delete an attachment from a document.
+
+## Syntax
+
+
+
+{`public DeleteAttachmentOperation(string documentId, string name, string changeVector = null)
+`}
+
+
+
+| Parameter | | |
+|------------------|--------|-------------------------------------------------------------------------|
+| **documentId** | string | ID of a document containing an attachment |
+| **name** | string | Name of an attachment |
+| **changeVector** | string | Entity changeVector, used for concurrency checks (`null` to skip check) |
+
+## Example
+
+
+
+{`store.Operations.Send(new DeleteAttachmentOperation("orders/1-A", "invoice.pdf"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-java.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-java.mdx
new file mode 100644
index 0000000000..eac3cba4bb
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-java.mdx
@@ -0,0 +1,35 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to delete an attachment from a document.
+
+## Syntax
+
+
+
+{`DeleteAttachmentOperation(String documentId, String name)
+
+DeleteAttachmentOperation(String documentId, String name, String changeVector)
+`}
+
+
+
+| Parameter | | |
+|------------------|--------|-------------------------------------------------------------------------|
+| **documentId** | String | ID of a document containing an attachment |
+| **name** | String | Name of an attachment |
+| **changeVector** | String | Entity changeVector, used for concurrency checks (`null` to skip check) |
+
+## Example
+
+
+
+{`store.operations().send(
+ new DeleteAttachmentOperation("orders/1-A", "invoice.pdf"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-nodejs.mdx
new file mode 100644
index 0000000000..05e8c3117d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_delete-attachment-nodejs.mdx
@@ -0,0 +1,50 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `DeleteAttachmentOperation` to delete an attachment from a document.
+
+* In this page:
+
+ * [Delete attachment example](../../../client-api/operations/attachments/delete-attachment.mdx#delete-attachment-example)
+ * [Syntax](../../../client-api/operations/attachments/delete-attachment.mdx#syntax)
+
+
+## Delete attachment example
+
+
+
+{`// Define the delete attachment operation
+const deleteAttachmentOp = new DeleteAttachmentOperation("employees/1-A", "photo.jpg");
+
+// Execute the operation by passing it to operations.send
+await documentStore.operations.send(deleteAttachmentOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+const deleteAttachmentOp = new DeleteAttachmentOperation(documentId, name);
+const deleteAttachmentOp = new DeleteAttachmentOperation(documentId, name, changeVector);
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------|-----------------------------------------------------------------------------------|
+| __documentId__ | `string` | ID of document from which attachment will be removed |
+| __name__ | `string` | Name of attachment to delete |
+| __changeVector__ | `string` | ChangeVector of attachment, used for concurrency checks (`null` to skip check) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-csharp.mdx
new file mode 100644
index 0000000000..47d869ffda
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-csharp.mdx
@@ -0,0 +1,71 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to get an attachment from a document.
+
+## Syntax
+
+
+
+{`public GetAttachmentOperation(string documentId, string name, AttachmentType type, string changeVector)
+`}
+
+
+
+
+
+{`public class AttachmentResult
+\{
+ public Stream Stream;
+ public AttachmentDetails Details;
+\}
+
+public class AttachmentDetails : AttachmentName
+\{
+ public string ChangeVector;
+ public string DocumentId;
+\}
+
+public class AttachmentName
+\{
+ public string Name;
+ public string Hash;
+ public string ContentType;
+ public long Size;
+\}
+`}
+
+
+
+| Parameter | | |
+|------------------|----------------| ----- |
+| **documentId** | string | ID of the document that contains the attachment |
+| **name** | string | Name of the attachment |
+| **type** | AttachmentType | Specifies whether to get the attachment from a document or from a revision (`Document` or `Revision`). |
+| **changeVector** | string | The ChangeVector of the document or the revision to which the attachment belongs. Mandatory when getting an attachment from a revision. Used for concurrency checks (use `null` to skip the check). |
+
+| Return Value | |
+| ------------- | ----- |
+| **Stream** | Stream containing an attachment |
+| **ChangeVector** | Change vector of document |
+| **DocumentId** | ID of document |
+| **Name** | Name of attachment |
+| **Hash** | Hash of attachment |
+| **ContentType** | MIME content type of an attachment |
+| **Size** | Size of attachment |
+
+## Example
+
+
+
+{`store.Operations.Send(new GetAttachmentOperation("orders/1-A",
+ "invoice.pdf",
+ AttachmentType.Document,
+ changeVector: null));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-java.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-java.mdx
new file mode 100644
index 0000000000..5f5c63f118
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-java.mdx
@@ -0,0 +1,122 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to get an attachment from a document.
+
+## Syntax
+
+
+
+{`GetAttachmentOperation(String documentId, String name, AttachmentType type, String changeVector)
+`}
+
+
+
+
+
+{`public class CloseableAttachmentResult implements AutoCloseable \{
+ private AttachmentDetails details;
+ private CloseableHttpResponse response;
+
+ public InputStream getData() throws IOException \{
+ return response.getEntity().getContent();
+ \}
+
+ public AttachmentDetails getDetails() \{
+ return details;
+ \}
+\}
+
+public class AttachmentDetails extends AttachmentName \{
+ private String changeVector;
+ private String documentId;
+
+ public String getChangeVector() \{
+ return changeVector;
+ \}
+
+ public void setChangeVector(String changeVector) \{
+ this.changeVector = changeVector;
+ \}
+
+ public String getDocumentId() \{
+ return documentId;
+ \}
+
+ public void setDocumentId(String documentId) \{
+ this.documentId = documentId;
+ \}
+\}
+
+public class AttachmentName \{
+ private String name;
+ private String hash;
+ private String contentType;
+ private long size;
+
+ public String getName() \{
+ return name;
+ \}
+
+ public void setName(String name) \{
+ this.name = name;
+ \}
+
+ public String getHash() \{
+ return hash;
+ \}
+
+ public void setHash(String hash) \{
+ this.hash = hash;
+ \}
+
+ public String getContentType() \{
+ return contentType;
+ \}
+
+ public void setContentType(String contentType) \{
+ this.contentType = contentType;
+ \}
+
+ public long getSize() \{
+ return size;
+ \}
+
+ public void setSize(long size) \{
+ this.size = size;
+ \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **documentId** | String | ID of the document that contains the attachment |
+| **name** | String | Name of the attachment |
+| **type** | AttachmentType | Specifies whether to get the attachment from a document or from a revision (`DOCUMENT` or `REVISION`). |
+| **changeVector** | String | The ChangeVector of the document or the revision to which the attachment belongs. Mandatory when getting an attachment from a revision. Used for concurrency checks (use `null` to skip the check). |
+
+| Return Value | |
+| ------------- | ----- |
+| **Stream** | InputStream containing an attachment |
+| **ChangeVector** | Change vector of document |
+| **DocumentId** | ID of document |
+| **Name** | Name of attachment |
+| **Hash** | Hash of attachment |
+| **ContentType** | MIME content type of an attachment |
+| **Size** | Size of attachment |
+
+## Example
+
+
+
+{`store.operations().send(
+ new GetAttachmentOperation("orders/1-A", "invoice.pdf", AttachmentType.DOCUMENT, null));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-nodejs.mdx
new file mode 100644
index 0000000000..7a4b8ed098
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_get-attachment-nodejs.mdx
@@ -0,0 +1,99 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `GetAttachmentOperation` to retrieve an attachment from a document.
+
+* In this page:
+
+ * [Get attachment example](../../../client-api/operations/attachments/get-attachment.mdx#get-attachment-example)
+ * [Syntax](../../../client-api/operations/attachments/get-attachment.mdx#syntax)
+
+
+## Get attachment example
+
+
+
+{`const fs = require("fs");
+
+// Define the get attachment operation
+const getAttachmentOp = new GetAttachmentOperation("employees/1-A", "attachmentName.txt", "Document", null);
+
+// Execute the operation by passing it to operations.send
+const attachmentResult = await documentStore.operations.send(getAttachmentOp);
+
+// Retrieve attachment content:
+attachmentResult.data
+ .pipe(fs.createWriteStream("attachment"))
+ .on("finish", () => \{
+ fs.readFile("attachment", "utf8", (err, data) => \{
+ if (err) \{
+ console.error("Error reading file:", err);
+ return;
+ \}
+ console.log("Content of attachment:", data);
+ \});
+ \});
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const getAttachmentOp = new GetAttachmentOperation(documentId, name, type, changeVector);
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| __documentId__ | `string` | Document ID that contains the attachment. |
+| __name__ | `string` | Name of attachment to get. |
+| __type__         | `string` | Specifies whether to get the attachment from a document or from a revision (`"Document"` or `"Revision"`). |
+| __changeVector__ | `string` | The ChangeVector of the document or the revision to which the attachment belongs. Mandatory when getting an attachment from a revision. Used for concurrency checks (use `null` to skip the check). |
+
+| Return Value of `store.operations.send(getAttachmentOp)` | |
+|-----------------------------------------------------------|-----------------------------------------|
+| `AttachmentResult` | An instance of class `AttachmentResult` |
+
+
+
+{`class AttachmentResult \{
+ data; // Stream containing the attachment content
+ details; // The AttachmentDetails object
+\}
+
+// The AttachmentDetails object:
+// =============================
+\{
+ // Change vector of the document that contains the attachment
+ changeVector; // string
+
+ // ID of the document that contains the attachment
+ documentId?; // string
+
+ // Name of attachment
+ name; // string;
+
+ // Hash of attachment
+ hash; // string;
+
+ // Content type of attachment
+ contentType; // string
+
+ // Size of attachment
+ size; // number
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-csharp.mdx
new file mode 100644
index 0000000000..8e86ea993d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-csharp.mdx
@@ -0,0 +1,71 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to put an attachment to a document.
+
+## Syntax
+
+
+
+{`public PutAttachmentOperation(string documentId,
+ string name,
+ Stream stream,
+ string contentType = null,
+ string changeVector = null)
+`}
+
+
+
+
+
+{`public class AttachmentDetails : AttachmentName
+\{
+ public string ChangeVector;
+ public string DocumentId;
+\}
+
+public class AttachmentName
+\{
+ public string Name;
+ public string Hash;
+ public string ContentType;
+ public long Size;
+\}
+`}
+
+
+
+| Parameter | | |
+|------------------|--------|-------------------------------------------------------------------------|
+| **documentId** | string | ID of a document which will contain an attachment |
+| **name** | string | Name of an attachment |
+| **stream** | Stream | Stream containing the attachment's raw bytes |
+| **contentType** | string | MIME type of attachment |
+| **changeVector** | string | Entity changeVector, used for concurrency checks (`null` to skip check) |
+
+| Return Value | |
+|------------------|-------------------------------------|
+| **ChangeVector** | Change vector of created attachment |
+| **DocumentId** | ID of document |
+| **Name** | Name of created attachment |
+| **Hash** | Hash of created attachment |
+| **ContentType** | MIME content type of attachment |
+| **Size** | Size of attachment |
+
+## Example
+
+
+
+{`AttachmentDetails attachmentDetails =
+ store.Operations.Send(
+ new PutAttachmentOperation("orders/1-A",
+ "invoice.pdf",
+ stream,
+ "application/pdf"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-java.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-java.mdx
new file mode 100644
index 0000000000..4bfe0db7f3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-java.mdx
@@ -0,0 +1,116 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to put an attachment to a document.
+
+## Syntax
+
+
+
+{`PutAttachmentOperation(String documentId, String name, InputStream stream)
+
+PutAttachmentOperation(String documentId, String name, InputStream stream, String contentType)
+
+PutAttachmentOperation(String documentId, String name, InputStream stream, String contentType, String changeVector)
+`}
+
+
+
+
+
+{`public class AttachmentDetails extends AttachmentName \{
+ private String changeVector;
+ private String documentId;
+
+ public String getChangeVector() \{
+ return changeVector;
+ \}
+
+ public void setChangeVector(String changeVector) \{
+ this.changeVector = changeVector;
+ \}
+
+ public String getDocumentId() \{
+ return documentId;
+ \}
+
+ public void setDocumentId(String documentId) \{
+ this.documentId = documentId;
+ \}
+\}
+
+public class AttachmentName \{
+ private String name;
+ private String hash;
+ private String contentType;
+ private long size;
+
+ public String getName() \{
+ return name;
+ \}
+
+ public void setName(String name) \{
+ this.name = name;
+ \}
+
+ public String getHash() \{
+ return hash;
+ \}
+
+ public void setHash(String hash) \{
+ this.hash = hash;
+ \}
+
+ public String getContentType() \{
+ return contentType;
+ \}
+
+ public void setContentType(String contentType) \{
+ this.contentType = contentType;
+ \}
+
+ public long getSize() \{
+ return size;
+ \}
+
+ public void setSize(long size) \{
+ this.size = size;
+ \}
+\}
+`}
+
+
+
+| Parameter | | |
+|------------------| ------------- | ----- |
+| **documentId** | String | ID of a document which will contain an attachment |
+| **name** | String | Name of an attachment |
+| **stream** | InputStream | Stream containing the attachment's raw bytes |
+| **contentType** | String | MIME type of attachment |
+| **changeVector** | String | Entity changeVector, used for concurrency checks (`null` to skip check) |
+
+| Return Value | |
+| ------------- | ----- |
+| **ChangeVector** | Change vector of created attachment |
+| **DocumentId** | ID of document |
+| **Name** | Name of created attachment |
+| **Hash** | Hash of created attachment |
+| **ContentType** | MIME content type of attachment |
+| **Size** | Size of attachment |
+
+## Example
+
+
+
+{`AttachmentDetails attachmentDetails = store
+ .operations().send(new PutAttachmentOperation("orders/1-A",
+ "invoice.pdf",
+ stream,
+ "application/pdf"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-nodejs.mdx
new file mode 100644
index 0000000000..b81b0f1ab2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/_put-attachment-nodejs.mdx
@@ -0,0 +1,89 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `PutAttachmentOperation` to add an attachment to a document.
+
+* In this page:
+
+ * [Put attachment example](../../../client-api/operations/attachments/put-attachment.mdx#put-attachment-example)
+ * [Syntax](../../../client-api/operations/attachments/put-attachment.mdx#syntax)
+
+
+## Put attachment example
+
+
+
+{`// Prepare content to attach
+const text = "Some content...";
+const byteArray = Buffer.from(text);
+
+// Define the put attachment operation
+const putAttachmentOp = new PutAttachmentOperation(
+ "employees/1-A", "attachmentName.txt", byteArray, "text/plain");
+
+// Execute the operation by passing it to operations.send
+const attachmentDetails = await documentStore.operations.send(putAttachmentOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+const putAttachmentOp = new PutAttachmentOperation(documentId, name, stream);
+const putAttachmentOp = new PutAttachmentOperation(documentId, name, stream, contentType);
+const putAttachmentOp = new PutAttachmentOperation(documentId, name, stream, contentType, changeVector);
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|------------------------------|-----------------------------------------------------------------------------------|
+| __documentId__ | `string` | Document ID to which the attachment will be added |
+| __name__ | `string` | Name of attachment to put |
+| __stream__ | `stream.Readable` / `Buffer` | A stream that contains the raw bytes of the attachment |
+| __contentType__ | `string` | Content type of attachment |
+| __changeVector__ | `string` | ChangeVector of attachment, used for concurrency checks (`null` to skip check) |
+
+| Return Value of `store.operations.send(putAttachmentOp)` | |
+|----------------------------------------------------------|---------------------------------------------|
+| `object` | An object with the new attachment's details |
+
+
+
+{`// The AttachmentDetails object:
+// =============================
+\{
+ // Change vector of attachment
+ changeVector; // string
+
+ // ID of the document that contains the attachment
+ documentId?; // string
+
+ // Name of attachment
+ name; // string;
+
+ // Hash of attachment
+ hash; // string;
+
+ // Content type of attachment
+ contentType; // string
+
+ // Size of attachment
+ size; // number
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/delete-attachment.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/delete-attachment.mdx
new file mode 100644
index 0000000000..b2e2842ac0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/delete-attachment.mdx
@@ -0,0 +1,39 @@
+---
+title: "Delete Attachment Operation"
+hide_table_of_contents: true
+sidebar_label: Delete Attachment
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeleteAttachmentCsharp from './_delete-attachment-csharp.mdx';
+import DeleteAttachmentJava from './_delete-attachment-java.mdx';
+import DeleteAttachmentNodejs from './_delete-attachment-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/get-attachment.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/get-attachment.mdx
new file mode 100644
index 0000000000..e7f476f546
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/get-attachment.mdx
@@ -0,0 +1,39 @@
+---
+title: "Get Attachment Operation"
+hide_table_of_contents: true
+sidebar_label: Get Attachment
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetAttachmentCsharp from './_get-attachment-csharp.mdx';
+import GetAttachmentJava from './_get-attachment-java.mdx';
+import GetAttachmentNodejs from './_get-attachment-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/attachments/put-attachment.mdx b/versioned_docs/version-7.1/client-api/operations/attachments/put-attachment.mdx
new file mode 100644
index 0000000000..91e8871301
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/attachments/put-attachment.mdx
@@ -0,0 +1,39 @@
+---
+title: "Put Attachment Operation"
+hide_table_of_contents: true
+sidebar_label: Put Attachment
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PutAttachmentCsharp from './_put-attachment-csharp.mdx';
+import PutAttachmentJava from './_put-attachment-java.mdx';
+import PutAttachmentNodejs from './_put-attachment-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/common/_category_.json b/versioned_docs/version-7.1/client-api/operations/common/_category_.json
new file mode 100644
index 0000000000..8ab6dded8e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/common/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 2,
+  "label": "Common Operations"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-csharp.mdx
new file mode 100644
index 0000000000..90c216fc7e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-csharp.mdx
@@ -0,0 +1,386 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteByQueryOperation` to delete a large number of documents that match the provided query in a single server call.
+
+* **Dynamic behavior**:
+ The deletion of documents matching the specified query is performed in batches of size 1024.
+ During the deletion process, documents that are added/modified **after** the delete operation has started
+ may also be deleted if they match the query criteria.
+
+* **Background operation**:
+ This operation is performed in the background on the server.
+ If needed, you can wait for the operation to complete. See: [Wait for completion](../../../client-api/operations/what-are-operations.mdx#wait-for-completion).
+
+* **Operation scope**:
+ `DeleteByQueryOperation` runs as a single-node transaction, not a cluster-wide transaction. As a result,
+ if you use this operation to delete documents that were originally created using a cluster-wide transaction,
+ their associated [Atomic guards](../../../client-api/session/cluster-transaction/atomic-guards.mdx) will Not be deleted.
+
+ * To avoid issues when recreating such documents using a cluster-wide session,
+ see [Best practice when storing a document](../../../client-api/session/cluster-transaction/atomic-guards.mdx#best-practice-when-storing-a-document-in-a-cluster-wide-transaction).
+ * To learn more about the differences between transaction types,
+ see [Cluster-wide transaction vs. Single-node transaction](../../../client-api/session/cluster-transaction/overview.mdx#cluster-wide-transaction-vs-single-node-transaction).
+* In this article:
+ * [Delete by dynamic query](../../../client-api/operations/common/delete-by-query.mdx#delete-by-dynamic-query)
+ * [Delete by index query](../../../client-api/operations/common/delete-by-query.mdx#delete-by-index-query)
+ * [Syntax](../../../client-api/operations/common/delete-by-query.mdx#syntax)
+
+
+
+## Delete by dynamic query
+
+
+
+##### Delete all documents in a collection
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying a collection
+var deleteByQueryOp = new DeleteByQueryOperation("from 'Orders'");
+
+// Execute the operation by passing it to Operations.Send
+var operation = store.Operations.Send(deleteByQueryOp);
+
+// All documents in collection 'Orders' will be deleted from the server.
+`}
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying a collection
+var deleteByQueryOp = new DeleteByQueryOperation("from 'Orders'");
+
+// Execute the operation by passing it to Operations.SendAsync
+var result = await store.Operations.SendAsync(deleteByQueryOp);
+
+// All documents in collection 'Orders' will be deleted from the server.
+`}
+
+
+
+
+{`from "Orders"
+`}
+
+
+
+
+
+
+
+##### Delete with filtering
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying a collection
+var deleteByQueryOp = new DeleteByQueryOperation("from 'Orders' where Freight > 30");
+
+// Execute the operation by passing it to Operations.Send
+var operation = store.Operations.Send(deleteByQueryOp);
+
+// * All documents matching the specified RQL will be deleted from the server.
+
+// * Since the dynamic query was made with a filtering condition,
+// an auto-index is generated (if no other matching auto-index already exists).
+`}
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying a collection
+var deleteByQueryOp = new DeleteByQueryOperation("from 'Orders' where Freight > 30");
+
+// Execute the operation by passing it to Operations.SendAsync
+var result = await store.Operations.SendAsync(deleteByQueryOp);
+
+// * All documents matching the provided RQL will be deleted from the server.
+
+// * Since a dynamic query was made with a filtering condition,
+// an auto-index is generated (if no other matching auto-index already exists).
+`}
+
+
+
+
+{`from "Orders" where Freight > 30
+`}
+
+
+
+
+
+
+
+## Delete by index query
+
+* `DeleteByQueryOperation` can only be performed on a **Map-index**.
+ An exception is thrown when executing the operation on a Map-Reduce index.
+
+* A few overloads are available, see the following examples:
+
+
+##### A sample Map-index
+
+
+
+{`// The index definition:
+// =====================
+
+public class Products_ByPrice : AbstractIndexCreationTask<Product>
+\{
+ public class IndexEntry
+ \{
+ public decimal Price \{ get; set; \}
+ \}
+
+ public Products_ByPrice()
+ \{
+ Map = products => from product in products
+ select new IndexEntry
+ \{
+ Price = product.PricePerUnit
+ \};
+ \}
+\}
+`}
+
+
+
+
+
+
+##### Delete documents via an index query
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying the index
+var deleteByQueryOp =
+ new DeleteByQueryOperation("from index 'Products/ByPrice' where Price > 10");
+
+// Execute the operation by passing it to Operations.Send
+var operation = store.Operations.Send(deleteByQueryOp);
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`// Define the delete by query operation
+var deleteByQueryOp = new DeleteByQueryOperation(new IndexQuery
+{
+ // Provide an RQL querying the index
+ Query = "from index 'Products/ByPrice' where Price > 10"
+});
+
+// Execute the operation by passing it to Operations.Send
+var operation = store.Operations.Send(deleteByQueryOp);
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`// Define the delete by query operation
+var deleteByQueryOp =
+ // Pass parameters:
+ // * The index name
+ // * A filtering expression on the index-field
+ new DeleteByQueryOperation("Products/ByPrice",
+ x => x.Price > 10);
+
+// Execute the operation by passing it to Operations.Send
+var operation = store.Operations.Send(deleteByQueryOp);
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`// Define the delete by query operation
+var deleteByQueryOp =
+ // Pass param:
+ // * A filtering expression on the index-field
+ new DeleteByQueryOperation(
+ x => x.Price > 10);
+
+// Execute the operation by passing it to Operations.Send
+var operation = store.Operations.Send(deleteByQueryOp);
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`from index "Products/ByPrice" where Price > 10
+`}
+
+
+
+
+
+
+
+##### Delete with options
+
+
+
+
+{`// Define the delete by query operation
+var deleteByQueryOp = new DeleteByQueryOperation(
+ // QUERY: Specify the query
+ new IndexQuery
+ {
+ Query = "from index 'Products/ByPrice' where Price > 10"
+ },
+ // OPTIONS: Specify the options for the operation
+ // (See all other available options in the Syntax section below)
+ new QueryOperationOptions
+ {
+ // Allow the operation to operate even if index is stale
+ AllowStale = true,
+ // Get info in the operation result about documents that were deleted
+ RetrieveDetails = true
+ });
+
+// Execute the operation by passing it to Operations.Send
+Operation operation = store.Operations.Send(deleteByQueryOp);
+
+// Wait for operation to complete
+var result = operation.WaitForCompletion(TimeSpan.FromSeconds(15));
+
+// * All documents with document-field PricePerUnit > 10 will be deleted from the server.
+
+// * Details about deleted documents are available:
+var details = result.Details;
+var documentIdThatWasDeleted = details[0].ToJson()["Id"];
+`}
+
+
+
+
+{`// Define the delete by query operation
+var deleteByQueryOp = new DeleteByQueryOperation(
+ // QUERY: Specify the query
+ new IndexQuery
+ {
+ Query = "from index 'Products/ByPrice' where Price > 10"
+ },
+ // OPTIONS: Specify the options for the operation
+ // (See all other available options in the Syntax section below)
+ new QueryOperationOptions
+ {
+ // Allow the operation to operate even if index is stale
+ AllowStale = true,
+ // Get info in the operation result about documents that were deleted
+ RetrieveDetails = true
+ });
+
+// Execute the operation by passing it to Operations.Send
+Operation operation = await store.Operations.SendAsync(deleteByQueryOp);
+
+// Wait for operation to complete
+BulkOperationResult result =
+ await operation.WaitForCompletionAsync(TimeSpan.FromSeconds(15))
+ .ConfigureAwait(false);
+
+// * All documents with document-field PricePerUnit > 10 will be deleted from the server.
+
+// * Details about deleted documents are available:
+var details = result.Details;
+var documentIdThatWasDeleted = details[0].ToJson()["Id"];
+`}
+
+
+
+
+{`from index "Products/ByPrice" where Price > 10
+`}
+
+
+
+
+* Specifying `QueryOperationOptions` is also supported by the other overload methods, see the Syntax section below.
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+// ====================
+
+DeleteByQueryOperation(
+    string queryToDelete);
+
+DeleteByQueryOperation(
+    IndexQuery queryToDelete,
+    QueryOperationOptions options = null);
+
+DeleteByQueryOperation<TEntity>(
+    string indexName,
+    Expression<Func<TEntity, bool>> expression,
+    QueryOperationOptions options = null);
+
+DeleteByQueryOperation<TEntity, TIndexCreator>(
+    Expression<Func<TEntity, bool>> expression,
+    QueryOperationOptions options = null)
+    where TIndexCreator : AbstractIndexCreationTask, new();
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-----------------------------|------------------------------------------------------------|
+| **queryToDelete** | string | The RQL query to perform |
+| **queryToDelete** | `IndexQuery` | Holds all the information required to query an index |
+| **indexName** | string | The name of the index queried |
+| **expression**    | `Expression<Func<TEntity, bool>>` | The expression that defines the query criteria |
+| **options** | `QueryOperationOptions` | Object holding different setting options for the operation |
+
+
+
+{`public class QueryOperationOptions
+\{
+ // Indicates whether operations are allowed on stale indexes.
+ // DEFAULT: false
+ public bool AllowStale \{ get; set; \}
+
+ // If AllowStale is set to false and index is stale,
+ // then this is the maximum timeout to wait for index to become non-stale.
+ // If timeout is exceeded then exception is thrown.
+ // DEFAULT: null (if index is stale then exception is thrown immediately)
+ public TimeSpan? StaleTimeout \{ get; set; \}
+
+ // Limits the number of base operations per second allowed.
+ // DEFAULT: no limit
+    public int? MaxOpsPerSecond \{ get; set; \}
+
+ // Determines whether operation details about each document should be returned by server.
+ // DEFAULT: false
+ public bool RetrieveDetails \{ get; set; \}
+
+ // Ignore the maximum number of statements a script can execute.
+ // Note: this is only relevant for the PatchByQueryOperation.
+ public bool IgnoreMaxStepsForScript \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-java.mdx b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-java.mdx
new file mode 100644
index 0000000000..0797865693
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-java.mdx
@@ -0,0 +1,129 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+`DeleteByQueryOperation` gives you the ability to delete a large number of documents with a single query.
+This operation is performed in the background on the server.
+
+## Syntax
+
+
+
+{`public DeleteByQueryOperation(IndexQuery queryToDelete)
+
+public DeleteByQueryOperation(IndexQuery queryToDelete, QueryOperationOptions options)
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **queryToDelete** | IndexQuery | Holds all the information required to query an index |
+| **options** | QueryOperationOptions | Holds different setting options for base operations |
+
+## Example I
+
+
+
+
+{`// delete all documents where name == 'Bob', using a dynamic query on the Persons collection
+store
+ .operations()
+ .send(new DeleteByQueryOperation(new IndexQuery("from Persons where name = 'Bob'")));
+`}
+
+
+
+
+{`from Persons where name = 'Bob'
+`}
+
+
+
+
+
+## Example II
+
+
+
+
+{`// remove all documents from the server where Age < 35, using the Person/ByAge index
+store
+    .operations()
+    .send(new DeleteByQueryOperation(new IndexQuery("from index 'Person/ByAge' where Age < 35")));
+`}
+
+
+
+
+{`from index 'Person/ByAge' where Age < 35
+`}
+
+
+
+
+## Example III
+
+
+
+
+{`// delete multiple docs with specific ids in a single run without loading them into the session
+Operation operation = store
+ .operations()
+ .sendAsync(new DeleteByQueryOperation(new IndexQuery(
+ "from People u where id(u) in ('people/1-A', 'people/3-A')"
+ )));
+`}
+
+
+
+
+{`from People u where id(u) in ('people/1-A', 'people/3-A')
+`}
+
+
+
+
+
+`DeleteByQueryOperation` is performed in the background on the server.
+You have the option to **wait** for it using `waitForCompletion`.
+
+
+
+
+{`// remove all documents from the server where Name == 'Bob' and Age >= 29, using the 'People' collection
+Operation operation = store.operations()
+ .sendAsync(new DeleteByQueryOperation(new IndexQuery(
+ "from People where Name = 'Bob' and Age >= 29"
+ )));
+
+operation.waitForCompletion();
+`}
+
+
+
+
+{`from People where Name = 'Bob' and Age >= 29
+`}
+
+
+
+
+
+## Remarks
+
+
+`DeleteByQueryOperation` can only be performed on a map index. Executing it on a map-reduce index will lead to an exception.
+
+
+
+
+The deletion of documents matching the specified query is run in batches of size 1024. RavenDB doesn't apply concurrency checks
+during the operation, so a document may be updated or deleted while the operation is in progress.
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-nodejs.mdx
new file mode 100644
index 0000000000..2995c81584
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-nodejs.mdx
@@ -0,0 +1,253 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteByQueryOperation` to delete a large number of documents that match the provided query in a single server call.
+
+* **Dynamic behavior**:
+ The deletion of documents matching the specified query is performed in batches of size 1024.
+ During the deletion process, documents that are added/modified **after** the delete operation has started
+ may also be deleted if they match the query criteria.
+
+* **Background operation**:
+ This operation is performed in the background on the server.
+  If needed, you can wait for the operation to complete, as shown in the sketch after this list. See: [Wait for completion](../../../client-api/operations/what-are-operations.mdx#wait-for-completion).
+
+* **Operation scope**:
+ `DeleteByQueryOperation` runs as a single-node transaction, not a cluster-wide transaction. As a result,
+ if you use this operation to delete documents that were originally created using a cluster-wide transaction,
+ their associated [Atomic guards](../../../client-api/session/cluster-transaction/atomic-guards.mdx) will Not be deleted.
+
+ * To avoid issues when recreating such documents using a cluster-wide session,
+ see [Best practice when storing a document](../../../client-api/session/cluster-transaction/atomic-guards.mdx#best-practice-when-storing-a-document-in-a-cluster-wide-transaction).
+ * To learn more about the differences between transaction types,
+ see [Cluster-wide transaction vs. Single-node transaction](../../../client-api/session/cluster-transaction/overview.mdx#cluster-wide-transaction-vs-single-node-transaction).
+* In this article:
+ * [Delete by dynamic query](../../../client-api/operations/common/delete-by-query.mdx#delete-by-dynamic-query)
+ * [Delete by index query](../../../client-api/operations/common/delete-by-query.mdx#delete-by-index-query)
+ * [Syntax](../../../client-api/operations/common/delete-by-query.mdx#syntax)
+
+
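+For example, waiting for the background deletion to finish might look like the following
+minimal sketch (it reuses the 'Orders' collection from the examples below):
+
+
+
+{`// Send the delete-by-query operation
+const operation = await store.operations.send(
+    new DeleteByQueryOperation("from 'Orders' where Freight > 30"));
+
+// Optionally wait for the server-side operation to complete
+await operation.waitForCompletion();
+`}
+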
+
+## Delete by dynamic query
+
+
+
+##### Delete all documents in collection
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying a collection
+const deleteByQueryOp = new DeleteByQueryOperation("from 'Orders'");
+
+// Execute the operation by passing it to operations.send
+const operation = await store.operations.send(deleteByQueryOp);
+
+// All documents in collection 'Orders' will be deleted from the server.
+`}
+
+
+
+
+{`from "Orders"
+`}
+
+
+
+
+
+
+
+##### Delete with filtering
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying a collection
+const deleteByQueryOp = new DeleteByQueryOperation("from 'Orders' where Freight > 30");
+
+// Execute the operation by passing it to operations.send
+const operation = await store.operations.send(deleteByQueryOp);
+
+// * All documents matching the specified RQL will be deleted from the server.
+
+// * Since the dynamic query was made with a filtering condition,
+// an auto-index is generated (if no other matching auto-index already exists).
+`}
+
+
+
+
+{`from "Orders" where Freight > 30
+`}
+
+
+
+
+
+
+
+## Delete by index query
+
+* `DeleteByQueryOperation` can only be performed on a **Map-index**.
+ An exception is thrown when executing the operation on a Map-Reduce index.
+
+* A few overloads are available, see the following examples:
+
+
+##### A sample Map-index
+
+
+
+{`// The index definition:
+// =====================
+
+class Products_ByPrice extends AbstractJavaScriptIndexCreationTask \{
+ constructor () \{
+ super();
+
+ this.map("products", product => \{
+ return \{
+ Price: product.PricePerUnit
+ \};
+ \});
+ \}
+\}
+`}
+
+
+
+
+
+
+##### Delete documents via an index query
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying the index
+const deleteByQueryOp =
+ new DeleteByQueryOperation("from index 'Products/ByPrice' where Price > 10");
+
+// Execute the operation by passing it to operations.send
+const operation = await store.operations.send(deleteByQueryOp);
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`// Define the index query, provide an RQL querying the index
+const indexQuery = new IndexQuery();
+indexQuery.query = "from index 'Products/ByPrice' where Price > 10";
+
+// Define the delete by query operation
+const deleteByQueryOp = new DeleteByQueryOperation(indexQuery);
+
+// Execute the operation by passing it to operations.send
+const operation = await store.operations.send(deleteByQueryOp);
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`from index "Products/ByPrice" where Price > 10
+`}
+
+
+
+
+
+
+
+##### Delete with options
+
+
+
+
+{`// QUERY: Define the index query, provide an RQL querying the index
+const indexQuery = new IndexQuery();
+indexQuery.query = "from index 'Products/ByPrice' where Price > 10";
+
+// OPTIONS: Define the operations options
+// (See all available options in the Syntax section below)
+const options = \{
+    // Allow the operation to operate even if index is stale
+    allowStale: true,
+    // Limit the number of base operations per second allowed.
+    maxOpsPerSecond: 500
+\}
+
+// Define the delete by query operation
+const deleteByQueryOp = new DeleteByQueryOperation(indexQuery, options);
+
+// Execute the operation by passing it to operations.send
+const operation = await store.operations.send(deleteByQueryOp);
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`from index "Products/ByPrice" where Price > 10
+`}
+
+
+
+
+* Specifying `options` is also supported by the other overload methods, see the Syntax section below.
+
+
+
+
+## Syntax
+
+
+
+{`// Available overload:
+// ===================
+const deleteByQueryOp = new DeleteByQueryOperation(indexQuery);
+const deleteByQueryOp = new DeleteByQueryOperation(indexQuery, options);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|--------------|------------------------------------------------------------|
+| **queryToDelete** | `string` | The RQL query to perform |
+| **queryToDelete** | `IndexQuery` | Holds all the information required to query an index |
+| **options** | `object` | Object holding different setting options for the operation |
+
+
+
+{`// options object
+\{
+ // Indicates whether operations are allowed on stale indexes.
+ // DEFAULT: false
+ allowStale, // boolean
+
+ // If allowStale is set to false and index is stale,
+ // then this is the maximum timeout to wait for index to become non-stale.
+ // If timeout is exceeded then exception is thrown.
+ // DEFAULT: null (if index is stale then exception is thrown immediately)
+ staleTimeout, // number
+
+ // Limits the number of base operations per second allowed.
+ // DEFAULT: null (no limit)
+ maxOpsPerSecond, // number
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-php.mdx b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-php.mdx
new file mode 100644
index 0000000000..e45b7da850
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-php.mdx
@@ -0,0 +1,294 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteByQueryOperation` to delete a large number of documents that match the provided query in a single server call.
+
+* **Dynamic behavior**:
+ The deletion of documents matching the specified query is performed in batches of size 1024.
+ During the deletion process, documents that are added/modified **after** the delete operation has started
+ may also be deleted if they match the query criteria.
+
+* **Background operation**:
+ This operation is performed in the background on the server.
+ If needed, you can wait for the operation to complete. See: [Wait for completion](../../../client-api/operations/what-are-operations.mdx#wait-for-completion).
+
+* **Operation scope**:
+ `DeleteByQueryOperation` runs as a single-node transaction, not a cluster-wide transaction. As a result,
+ if you use this operation to delete documents that were originally created using a cluster-wide transaction,
+ their associated [Atomic guards](../../../client-api/session/cluster-transaction/atomic-guards.mdx) will Not be deleted.
+
+ * To avoid issues when recreating such documents using a cluster-wide session,
+ see [Best practice when storing a document](../../../client-api/session/cluster-transaction/atomic-guards.mdx#best-practice-when-storing-a-document-in-a-cluster-wide-transaction).
+ * To learn more about the differences between transaction types,
+ see [Cluster-wide transaction vs. Single-node transaction](../../../client-api/session/cluster-transaction/overview.mdx#cluster-wide-transaction-vs-single-node-transaction).
+* In this article:
+ * [Delete by dynamic query](../../../client-api/operations/common/delete-by-query.mdx#delete-by-dynamic-query)
+ * [Delete by index query](../../../client-api/operations/common/delete-by-query.mdx#delete-by-index-query)
+ * [Syntax](../../../client-api/operations/common/delete-by-query.mdx#syntax)
+
+
+
+## Delete by dynamic query
+
+
+
+##### Delete all documents in a collection
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying a collection
+$deleteByQueryOp = new DeleteByQueryOperation("from 'Orders'");
+
+// Execute the operation by passing it to Operations.Send
+$operation = $store->operations()->send($deleteByQueryOp);
+
+// All documents in collection 'Orders' will be deleted from the server.
+`}
+
+
+
+
+{`from "Orders"
+`}
+
+
+
+
+
+
+
+##### Delete with filtering
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying a collection
+$deleteByQueryOp = new DeleteByQueryOperation("from 'Orders' where Freight > 30");
+
+// Execute the operation by passing it to Operations.Send
+$operation = $store->operations()->send($deleteByQueryOp);
+
+// * All documents matching the specified RQL will be deleted from the server.
+
+// * Since the dynamic query was made with a filtering condition,
+// an auto-index is generated (if no other matching auto-index already exists).
+`}
+
+
+
+
+{`from "Orders" where Freight > 30
+`}
+
+
+
+
+
+
+
+## Delete by index query
+
+* `DeleteByQueryOperation` can only be performed on a **Map-index**.
+ An exception is thrown when executing the operation on a Map-Reduce index.
+
+* A few overloads are available, see the following examples:
+
+
+##### A sample Map-index
+
+
+
+{`// The index definition:
+// =====================
+
+class IndexEntry
+\{
+ public float $price;
+
+ public function getPrice(): float
+ \{
+ return $this->price;
+ \}
+
+ public function setPrice(float $price): void
+ \{
+ $this->price = $price;
+ \}
+\}
+
+class Products_ByPrice extends AbstractIndexCreationTask
+\{
+ public function __construct()
+ \{
+ parent::__construct();
+
+ $this->map = "from product in products select new \{price = product.PricePerUnit\}";
+ \}
+\}
+`}
+
+
+
+
+
+
+##### Delete documents via an index query
+
+
+
+
+{`// Define the delete by query operation, pass an RQL querying the index
+$deleteByQueryOp = new DeleteByQueryOperation("from index 'Products/ByPrice' where Price > 10");
+
+// Execute the operation by passing it to Operations.Send
+$operation = $store->operations()->send($deleteByQueryOp);
+
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`// Define the delete by query operation
+$deleteByQueryOp = new DeleteByQueryOperation(
+ // Provide an RQL querying the index
+ new IndexQuery("from index 'Products/ByPrice' where Price > 10")
+);
+
+// Execute the operation by passing it to Operations.Send
+$operation = $store->operations()->send($deleteByQueryOp);
+
+// All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`from index "Products/ByPrice" where Price > 10
+`}
+
+
+
+
+
+
+
+##### Delete with options
+
+
+
+
+{`// OPTIONS: Specify the options for the operation
+// (See all other available options in the Syntax section below)
+$options = new QueryOperationOptions();
+// Allow the operation to operate even if index is stale
+$options->setAllowStale(true);
+// Get info in the operation result about documents that were deleted
+$options->setRetrieveDetails(true);
+
+// Define the delete by query operation
+$deleteByQueryOp = new DeleteByQueryOperation(
+ new IndexQuery("from index 'Products/ByPrice' where Price > 10"), // QUERY: Specify the query
+ $options // OPTIONS:
+);
+
+// Execute the operation by passing it to Operations.Send
+/** @var Operation $operation */
+$operation = $store->operations()->sendAsync($deleteByQueryOp);
+
+// Wait for operation to complete
+/** @var BulkOperationResult $result */
+$result = $operation->waitForCompletion(Duration::ofSeconds(15));
+
+// * All documents with document-field PricePerUnit > 10 will be deleted from the server.
+
+// * Details about deleted documents are available:
+$details = $result->getDetails();
+$documentIdThatWasDeleted = $details[0]->getId();
+`}
+
+
+
+
+{`from index "Products/ByPrice" where Price > 10
+`}
+
+
+
+
+
+
+
+## Syntax
+
+
+
+{`class DeleteByQueryOperation implements OperationInterface
+\{
+ /**
+ * Usage:
+ * - new DeleteByQueryOperation("from 'Orders'")
+ * - new DeleteByQueryOperation("from 'Orders'", $options)
+ *
+ * - new DeleteByQueryOperation(new IndexQuery("from 'Orders'"))
+ * - new DeleteByQueryOperation(new IndexQuery("from 'Orders'"), $options)
+ *
+ * @param IndexQuery|string|null $queryToDelete
+ * @param QueryOperationOptions|null $options
+ */
+ public function __construct(IndexQuery|string|null $queryToDelete, ?QueryOperationOptions $options = null) \{
+ // ...
+ \}
+
+ // ...
+\}
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------------|--------------------------|------------------------------------------------------------|
+| **$queryToDelete** | `string` | The RQL query to perform |
+| **$queryToDelete** | `IndexQuery` | Holds all the information required to query an index |
+| **$options** | `?QueryOperationOptions` | Object holding different setting options for the operation |
+
+
+
+{`class QueryOperationOptions
+\{
+ // Indicates whether operations are allowed on stale indexes.
+ private bool $allowStale = false;
+
+ // Limits the number of base operations per second allowed.
+ // DEFAULT: no limit
+ private ?int $maxOpsPerSecond = null;
+
+ // If AllowStale is set to false and index is stale,
+ // then this is the maximum timeout to wait for index to become non-stale.
+ // If timeout is exceeded then exception is thrown.
+ // DEFAULT: null (if index is stale then exception is thrown immediately)
+ private ?Duration $staleTimeout = null;
+
+ // Determines whether operation details about each document should be returned by server.
+ private bool $retrieveDetails = false;
+
+ // Ignore the maximum number of statements a script can execute.
+ // Note: this is only relevant for the PatchByQueryOperation.
+ private bool $ignoreMaxStepsForScript = false;
+
+ // getters and setters
+\}
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-python.mdx b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-python.mdx
new file mode 100644
index 0000000000..2a6827af39
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/common/_delete-by-query-python.mdx
@@ -0,0 +1,204 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteByQueryOperation` to delete a large number of documents that match the provided query in a single server call.
+
+* **Dynamic behavior**:
+ The deletion of documents matching the specified query is performed in batches of size 1024.
+ During the deletion process, documents that are added/modified **after** the delete operation has started
+ may also be deleted if they match the query criteria.
+
+* **Background operation**:
+  This operation is performed in the background on the server.
+  If needed, you can wait for the operation to complete, as shown in the sketch after this list.
+
+* **Operation scope**:
+ `DeleteByQueryOperation` runs as a single-node transaction, not a cluster-wide transaction. As a result,
+ if you use this operation to delete documents that were originally created using a cluster-wide transaction,
+ their associated [Atomic guards](../../../client-api/session/cluster-transaction/atomic-guards.mdx) will Not be deleted.
+
+ * To avoid issues when recreating such documents using a cluster-wide session,
+ see [Best practice when storing a document](../../../client-api/session/cluster-transaction/atomic-guards.mdx#best-practice-when-storing-a-document-in-a-cluster-wide-transaction).
+ * To learn more about the differences between transaction types,
+ see [Cluster-wide transaction vs. Single-node transaction](../../../client-api/session/cluster-transaction/overview.mdx#cluster-wide-transaction-vs-single-node-transaction).
+* In this article:
+ * [Delete by dynamic query](../../../client-api/operations/common/delete-by-query.mdx#delete-by-dynamic-query)
+ * [Delete by index query](../../../client-api/operations/common/delete-by-query.mdx#delete-by-index-query)
+
+
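+For example, waiting for the background deletion to finish might look like the following
+minimal sketch (it reuses the 'Orders' collection from the examples below):
+
+
+
+{`# Send the delete-by-query operation
+delete_by_query_op = DeleteByQueryOperation("from 'Orders' where Freight > 30")
+operation = store.operations.send_async(delete_by_query_op)
+
+# Optionally wait for the server-side operation to complete
+operation.wait_for_completion()
+`}
+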
+
+## Delete by dynamic query
+
+
+
+##### Delete all documents in a collection
+
+
+
+
+{`# Define the delete by query operation, pass an RQL querying a collection
+delete_by_query_op = DeleteByQueryOperation("from 'Orders'")
+
+# Execute the operation by passing it to operations.send_async
+operation = store.operations.send_async(delete_by_query_op)
+
+# All documents in collection 'Orders' will be deleted from the server
+`}
+
+
+
+
+{`from "Orders"
+`}
+
+
+
+
+
+
+
+##### Delete with filtering
+
+
+
+
+{`# Define the delete by query operation, pass an RQL querying a collection
+delete_by_query_op = DeleteByQueryOperation("from 'Orders' where Freight > 30")
+
+# Execute the operation by passing it to operations.send_async
+operation = store.operations.send_async(delete_by_query_op)
+
+# * All documents matching the specified RQL will be deleted from the server.
+#
+# * Since the dynamic query was made with a filtering condition,
+# an auto-index is generated (if no other matching auto-index already exists).
+`}
+
+
+
+
+{`from "Orders" where Freight > 30
+`}
+
+
+
+
+
+
+
+## Delete by index query
+
+* `DeleteByQueryOperation` can only be performed on a **Map-index**.
+ An exception is thrown when executing the operation on a Map-Reduce index.
+
+* A few overloads are available, see the following examples:
+
+
+##### A sample Map-index
+
+
+
+{`# The index definition:
+# =====================
+class ProductsByPrice(AbstractIndexCreationTask):
+ class IndexEntry:
+ def __init__(self, price: int):
+ self.price = price
+
+ def __init__(self):
+ super().__init__()
+ self.map = "from product in products select new \{price = product.PricePerUnit\}"
+`}
+
+
+
+
+
+
+##### Delete documents via an index query
+
+
+
+
+{`# Define the delete by query operation, pass an RQL querying the index
+delete_by_query_op = DeleteByQueryOperation("from index 'Products/ByPrice' where Price > 10")
+
+# Execute the operation by passing it to operations.send_async
+operation = store.operations.send_async(delete_by_query_op)
+
+# All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`# Define the delete by query operation
+delete_by_query_op = DeleteByQueryOperation(
+ IndexQuery(query="from index 'Products/ByPrice' where Price > 10")
+)
+
+# Execute the operation by passing it to operations.send_async
+operation = store.operations.send_async(delete_by_query_op)
+
+# All documents with document-field PricePerUnit > 10 will be deleted from the server.
+`}
+
+
+
+
+{`from index "Products/ByPrice" where Price > 10
+`}
+
+
+
+
+
+
+
+##### Delete with options
+
+
+
+
+{`# Define the delete by query operation
+delete_by_query_op = DeleteByQueryOperation(
+ # QUERY: Specify the query
+ IndexQuery(query="from index 'Products/ByPrice' where Price > 10"),
+ # OPTIONS: Specify the options for the operation
+ # (see all available options in the QueryOperationOptions class)
+ QueryOperationOptions(
+ # Allow the operation to operate even if index is stale
+ allow_stale=True,
+ # Get info in the operation result about documents that were deleted
+ retrieve_details=True,
+ ),
+)
+
+# Execute the operation by passing it to operations.send_async
+operation = store.operations.send_async(delete_by_query_op)
+
+# * All documents with document-field PricePerUnit > 10 will be deleted from the server
+
+# * Wait for the operation to complete to get details about the deleted documents:
+result = operation.wait_for_completion()
+details = result.details
+document_id_that_was_deleted = details[0]["Id"]
+`}
+
+
+
+
+{`from index "Products/ByPrice" where Price > 10
+`}
+
+
+
+
+* Specifying `QueryOperationOptions` is also supported by the other overload methods.
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/common/delete-by-query.mdx b/versioned_docs/version-7.1/client-api/operations/common/delete-by-query.mdx
new file mode 100644
index 0000000000..f242bdf5c7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/common/delete-by-query.mdx
@@ -0,0 +1,55 @@
+---
+title: "Delete by Query Operation"
+hide_table_of_contents: true
+sidebar_label: Delete by Query
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeleteByQueryCsharp from './_delete-by-query-csharp.mdx';
+import DeleteByQueryJava from './_delete-by-query-java.mdx';
+import DeleteByQueryPython from './_delete-by-query-python.mdx';
+import DeleteByQueryPhp from './_delete-by-query-php.mdx';
+import DeleteByQueryNodejs from './_delete-by-query-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_category_.json b/versioned_docs/version-7.1/client-api/operations/counters/_category_.json
new file mode 100644
index 0000000000..1c2a845242
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 6,
+  "label": "Counters"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-csharp.mdx
new file mode 100644
index 0000000000..37dea336d3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-csharp.mdx
@@ -0,0 +1,444 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+*CounterBatchOperation* allows you to operate on multiple counters (`Increment`, `Get`, `Delete`) of different documents in a **single request**.
+
+## Syntax
+
+
+
+{`public CounterBatchOperation(CounterBatch counterBatch)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **counterBatch** | `CounterBatch` | An object that holds a list of `DocumentCountersOperation`. Each element in the list describes the counter operations to perform for a specific document |
+
+
+
+{`public class CounterBatch
+\{
+ public bool ReplyWithAllNodesValues; // A flag that indicates if the results should include a
+ // dictionary of counter values per database node
+ public List<DocumentCountersOperation> Documents = new List<DocumentCountersOperation>();
+\}
+`}
+
+
+
+#### DocumentCountersOperation
+
+
+
+{`public class DocumentCountersOperation
+\{
+ public string DocumentId; // Id of the document that holds the counters
+ public List<CounterOperation> Operations; // A list of counter operations to perform
+\}
+`}
+
+
+
+#### CounterOperation
+
+
+
+{`public class CounterOperation
+\{
+ public CounterOperationType Type;
+ public string CounterName;
+ public long Delta; // the value to increment by
+\}
+`}
+
+
+
+#### CounterOperationType
+
+
+
+{`public enum CounterOperationType
+\{
+ Increment,
+ Delete,
+ Get
+\}
+`}
+
+
+
+
+A document that has counters holds all its counter names in the `metadata`.
+Therefore, when creating a new counter, the parent document is modified, as the counter's name needs to be added to the metadata.
+Deleting a counter also modifies the parent document, as the counter's name needs to be removed from the metadata.
+Incrementing an existing counter will not modify the parent document.
+
+Even if a `DocumentCountersOperation` contains several `CounterOperation` items that affect the document's metadata (create, delete),
+the parent document will be modified **only once**, after all the `CounterOperation` items in this `DocumentCountersOperation` have been processed.
+If `DocumentCountersOperation` doesn't contain any `CounterOperation` that affects the metadata, the parent document won't be modified.
+
+
+
+
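+To see this, you can inspect the counter names kept in the parent document's metadata,
+as in the following minimal sketch (the 'User' class and document ID are illustrative):
+
+
+
+{`using (var session = store.OpenSession())
+\{
+    var user = session.Load<User>("users/1");
+
+    // The names of all counters of this document, as kept in its metadata
+    List<string> counterNames = session.Advanced.GetCountersFor(user);
+
+    // The raw metadata exposes the same list under the "@counters" key
+    var metadata = session.Advanced.GetMetadataFor(user);
+\}
+`}
+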
+
+## Return Value
+
+* *CounterBatchOperation* returns a `CountersDetail` object, which holds a list of `CounterDetail` objects.
+
+* If a `CounterOperationType` is `Increment` or `Get`, a `CounterDetail` object will be added to the result.
+ `Delete` operations will not be included in the result.
+
+
+
+{`public class CountersDetail
+\{
+ public List<CounterDetail> Counters;
+\}
+`}
+
+
+
+
+
+{`public class CounterDetail
+\{
+ public string DocumentId; // ID of the document that holds the counter
+ public string CounterName; // The counter name
+ public long TotalValue; // Total counter value
+ public Dictionary<string, long> CounterValues; // A dictionary of counter values per database node
+ public long Etag; // Counter Etag
+ public string ChangeVector; // Change vector of the counter
+\}
+`}
+
+
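+For example, the returned `CountersDetail` can be consumed as in the following minimal sketch
+(assuming a `counterBatch` built as in the examples below):
+
+
+
+{`CountersDetail result = store.Operations.Send(new CounterBatchOperation(counterBatch));
+
+foreach (CounterDetail detail in result.Counters)
+\{
+    // A 'Get' for a counter that does not exist yields a null entry
+    if (detail == null)
+        continue;
+
+    Console.WriteLine(detail.DocumentId + " / " + detail.CounterName + " = " + detail.TotalValue);
+\}
+`}
+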
+
+
+
+## Examples
+
+Assume we have two documents, *"users/1"* and *"users/2"*, that hold 3 counters each -
+*"likes"*, *"dislikes"* and *"downloads"* - with values 10, 20 and 30 (respectively).
+
+### Example #1 : Increment Multiple Counters in a Batch
+
+
+
+{`var operationResult = store.Operations.Send(new CounterBatchOperation(new CounterBatch
+\{
+ Documents = new List<DocumentCountersOperation>
+ \{
+ new DocumentCountersOperation
+ \{
+ DocumentId = "users/1",
+ Operations = new List<CounterOperation>
+ \{
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Increment,
+ CounterName = "likes",
+ Delta = 5
+ \},
+ new CounterOperation
+ \{
+ // No Delta specified, value will be incremented by 1
+ // (From RavenDB 6.2 on, the default Delta is 1)
+
+ Type = CounterOperationType.Increment,
+ CounterName = "dislikes"
+ \}
+ \}
+ \},
+ new DocumentCountersOperation
+ \{
+ DocumentId = "users/2",
+ Operations = new List<CounterOperation>
+ \{
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Increment,
+ CounterName = "likes",
+ Delta = 100
+ \},
+ new CounterOperation
+ \{
+ // this will create a new counter "score", with initial value 50
+ // "score" will be added to counter-names in "users/2" metadata
+
+ Type = CounterOperationType.Increment,
+ CounterName = "score",
+ Delta = 50
+ \}
+ \}
+ \}
+ \}
+\}));
+`}
+
+
+
+#### Result:
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 15,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "score",
+ "TotalValue" : 50,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+### Example #2 : Get Multiple Counters in a Batch
+
+
+
+{`var operationResult = store.Operations.Send(new CounterBatchOperation(new CounterBatch
+\{
+ Documents = new List<DocumentCountersOperation>
+ \{
+ new DocumentCountersOperation
+ \{
+ DocumentId = "users/1",
+ Operations = new List<CounterOperation>
+ \{
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Get,
+ CounterName = "likes"
+ \},
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Get,
+ CounterName = "downloads"
+ \}
+ \}
+ \},
+ new DocumentCountersOperation
+ \{
+ DocumentId = "users/2",
+ Operations = new List<CounterOperation>
+ \{
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Get,
+ CounterName = "likes"
+ \},
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Get,
+ CounterName = "score"
+ \}
+ \}
+ \}
+ \}
+\}));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 15,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "downloads",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "score",
+ "TotalValue" : 50,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+### Example #3 : Delete Multiple Counters in a Batch
+
+
+
+{`var operationResult = store.Operations.Send(new CounterBatchOperation(new CounterBatch
+\{
+ Documents = new List<DocumentCountersOperation>
+ \{
+ new DocumentCountersOperation
+ \{
+ DocumentId = "users/1",
+ Operations = new List<CounterOperation>
+ \{
+ // "likes" and "dislikes" will be removed from counter-names in "users/1" metadata
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Delete,
+ CounterName = "likes"
+ \},
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Delete,
+ CounterName = "dislikes"
+ \}
+ \}
+ \},
+ new DocumentCountersOperation
+ \{
+ DocumentId = "users/2",
+ Operations = new List<CounterOperation>
+ \{
+ // "downloads" will be removed from counter-names in "users/2" metadata
+
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Delete,
+ CounterName = "downloads"
+ \}
+ \}
+ \}
+ \}
+\}));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters": []
+\}
+`}
+
+
+### Example #4 : Mix Different Types of CounterOperations in a Batch
+
+
+
+{`var operationResult = store.Operations.Send(new CounterBatchOperation(new CounterBatch
+\{
+ Documents = new List<DocumentCountersOperation>
+ \{
+ new DocumentCountersOperation
+ \{
+ DocumentId = "users/1",
+ Operations = new List<CounterOperation>
+ \{
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Increment,
+ CounterName = "likes",
+ Delta = 30
+ \},
+ new CounterOperation
+ \{
+ // The results will include null for this 'Get'
+ // since we deleted the "dislikes" counter in the previous example flow
+ Type = CounterOperationType.Get,
+ CounterName = "dislikes"
+ \},
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Delete,
+ CounterName = "downloads"
+ \}
+ \}
+ \},
+ new DocumentCountersOperation
+ \{
+ DocumentId = "users/2",
+ Operations = new List<CounterOperation>
+ \{
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Get,
+ CounterName = "likes"
+ \},
+ new CounterOperation
+ \{
+ Type = CounterOperationType.Delete,
+ CounterName = "dislikes"
+ \}
+ \}
+ \}
+ \}
+\}));
+`}
+
+
+
+#### Result:
+
+* Note: The `Delete` operations are Not included in the results.
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \},
+ null,
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-java.mdx b/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-java.mdx
new file mode 100644
index 0000000000..735c58cfa0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-java.mdx
@@ -0,0 +1,352 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+*CounterBatchOperation* allows you to operate on multiple counters (`INCREMENT`, `GET`, `DELETE`) of different documents in a **single request**.
+
+## Syntax
+
+
+
+{`public CounterBatchOperation(CounterBatch counterBatch)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **counterBatch** | `CounterBatch` | An object that holds a list of `DocumentCountersOperation`. Each element in the list describes the counter operations to perform for a specific document |
+
+
+
+{`public class CounterBatch \{
+ private boolean replyWithAllNodesValues;
+ private List<DocumentCountersOperation> documents = new ArrayList<>();
+
+ // getters and setters
+\}
+`}
+
+
+
+#### DocumentCountersOperation
+
+
+
+{`public class DocumentCountersOperation \{
+ private List<CounterOperation> operations;
+ private String documentId;
+
+ // getters and setters
+\}
+`}
+
+
+
+#### CounterOperation
+
+
+
+{`public static class CounterOperation \{
+ private CounterOperationType type;
+ private String counterName;
+ private long delta; // the value to increment by
+
+ // getters and setters
+\}
+`}
+
+
+
+#### CounterOperationType
+
+
+
+{`public enum CounterOperationType \{
+ NONE,
+ INCREMENT,
+ DELETE,
+ GET,
+ PUT
+\}
+`}
+
+
+
+
+A document that has counters holds all its counter names in the `metadata`.
+Therefore, when creating a new counter, the parent document is modified, as the counter's name needs to be added to the metadata.
+Deleting a counter also modifies the parent document, as the counter's name needs to be removed from the metadata.
+Incrementing an existing counter will not modify the parent document.
+
+Even if a `DocumentCountersOperation` contains several `CounterOperation` items that affect the document's metadata (create, delete),
+the parent document will be modified **only once**, after all the `CounterOperation` items in this `DocumentCountersOperation` have been processed.
+If `DocumentCountersOperation` doesn't contain any `CounterOperation` that affects the metadata, the parent document won't be modified.
+
+
+
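+To see this, you can inspect the counter names kept in the parent document's metadata,
+as in the following minimal sketch (the 'User' class and document ID are illustrative):
+
+
+
+{`try (IDocumentSession session = store.openSession()) \{
+    User user = session.load(User.class, "users/1");
+
+    // The names of all counters of this document, as kept in its metadata
+    List<String> counterNames = session.advanced().getCountersFor(user);
+\}
+`}
+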
+
+
+## Return Value
+
+* *CounterBatchOperation* returns a `CountersDetail` object, which holds a list of `CounterDetail` objects.
+
+* If a `CounterOperationType` is `INCREMENT` or `GET`, a `CounterDetail` object will be added to the result.
+ `DELETE` operations will not be included in the result.
+
+
+
+{`public class CountersDetail \{
+
+ private List<CounterDetail> counters;
+
+ // getters and setters
+\}
+`}
+
+
+
+
+
+{`public class CounterDetail \{
+ private String documentId; // ID of the document that holds the counter
+ private String counterName; // The counter name
+ private long totalValue; // Total counter value
+ private long etag; // Counter Etag
+ private Map<String, Long> counterValues; // A map of counter values per database node
+
+ private String changeVector; // Change vector of the counter
+
+ // getters and setters
+\}
+`}
+
+
+
+
+
+## Examples
+
+Assume we have two documents, *"users/1"* and *"users/2"*, that hold 3 counters each -
+*"likes"*, *"dislikes"* and *"downloads"* - with values 10, 20 and 30 (respectively).
+
+### Example #1 : Increment Multiple Counters in a Batch
+
+
+
+{`DocumentCountersOperation operation1 = new DocumentCountersOperation();
+operation1.setDocumentId("users/1");
+operation1.setOperations(Arrays.asList(
+ CounterOperation.create("likes", CounterOperationType.INCREMENT, 5),
+ CounterOperation.create("dislikes", CounterOperationType.INCREMENT) // No delta specified, value will stay the same
+));
+
+DocumentCountersOperation operation2 = new DocumentCountersOperation();
+operation2.setDocumentId("users/2");
+operation2.setOperations(Arrays.asList(
+ CounterOperation.create("likes", CounterOperationType.INCREMENT, 100),
+
+ // this will create a new counter "score", with initial value 50
+ // "score" will be added to counter-names in "users/2" metadata
+ CounterOperation.create("score", CounterOperationType.INCREMENT, 50)
+));
+
+CounterBatch counterBatch = new CounterBatch();
+counterBatch.setDocuments(Arrays.asList(operation1, operation2));
+store.operations().send(new CounterBatchOperation(counterBatch));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 15,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "score",
+ "TotalValue" : 50,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+### Example #2 : Get Multiple Counters in a Batch
+
+
+
+{`DocumentCountersOperation operation1 = new DocumentCountersOperation();
+operation1.setDocumentId("users/1");
+operation1.setOperations(Arrays.asList(
+ CounterOperation.create("likes", CounterOperationType.GET),
+ CounterOperation.create("downloads", CounterOperationType.GET)
+));
+
+DocumentCountersOperation operation2 = new DocumentCountersOperation();
+operation2.setDocumentId("users/2");
+operation2.setOperations(Arrays.asList(
+ CounterOperation.create("likes", CounterOperationType.GET),
+ CounterOperation.create("score", CounterOperationType.GET)
+));
+
+CounterBatch counterBatch = new CounterBatch();
+counterBatch.setDocuments(Arrays.asList(operation1, operation2));
+
+store.operations().send(new CounterBatchOperation(counterBatch));
+`}
+
+
+
+#### Result:
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 15,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "downloads",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "score",
+ "TotalValue" : 50,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+### Example #3 : Delete Multiple Counters in a Batch
+
+
+
+{`DocumentCountersOperation operation1 = new DocumentCountersOperation();
+operation1.setDocumentId("users/1");
+operation1.setOperations(Arrays.asList(
+ // "likes" and "dislikes" will be removed from counter-names in "users/1" metadata
+ CounterOperation.create("likes", CounterOperationType.DELETE),
+ CounterOperation.create("dislikes", CounterOperationType.DELETE)
+));
+
+DocumentCountersOperation operation2 = new DocumentCountersOperation();
+operation2.setDocumentId("users/2");
+operation2.setOperations(Arrays.asList(
+ // "downloads" will be removed from counter-names in "users/2" metadata
+ CounterOperation.create("downloads", CounterOperationType.DELETE)
+));
+
+CounterBatch counterBatch = new CounterBatch();
+counterBatch.setDocuments(Arrays.asList(operation1, operation2));
+store.operations().send(new CounterBatchOperation(counterBatch));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters": []
+\}
+`}
+
+
+### Example #4 : Mix Different Types of CounterOperations in a Batch
+
+
+
+{`DocumentCountersOperation operation1 = new DocumentCountersOperation();
+operation1.setDocumentId("users/1");
+operation1.setOperations(Arrays.asList(
+ CounterOperation.create("likes", CounterOperationType.INCREMENT, 30),
+ // The results will include null for this 'Get'
+ // since we deleted the "dislikes" counter in the previous example flow
+ CounterOperation.create("dislikes", CounterOperationType.GET),
+ CounterOperation.create("downloads", CounterOperationType.DELETE)
+));
+
+DocumentCountersOperation operation2 = new DocumentCountersOperation();
+operation2.setDocumentId("users/2");
+operation2.setOperations(Arrays.asList(
+ CounterOperation.create("likes", CounterOperationType.GET),
+ CounterOperation.create("dislikes", CounterOperationType.DELETE)
+));
+
+CounterBatch counterBatch = new CounterBatch();
+counterBatch.setDocuments(Arrays.asList(operation1, operation2));
+store.operations().send(new CounterBatchOperation(counterBatch));
+`}
+
+
+
+#### Result:
+
+* Note: The `Delete` operations are Not included in the result.
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \},
+ null,
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-nodejs.mdx
new file mode 100644
index 0000000000..58cf903b70
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-nodejs.mdx
@@ -0,0 +1,399 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+*CounterBatchOperation* allows you to operate on multiple counters (`Increment`, `Get`, `Delete`) of different documents in a **single request**.
+
+## Syntax
+
+
+
+{`const counterBatchOp = new CounterBatchOperation(counterBatch);
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **counterBatch** | `CounterBatch` | An object that holds a list of `DocumentCountersOperation`. Each element in the list describes the counter operations to perform for a specific document |
+
+
+
+{`// The CounterBatch object:
+// ========================
+\{
+ // A list of "DocumentCountersOperation" objects
+ documents;
+ // A flag indicating if results should include a dictionary of counter values per database node
+ replyWithAllNodesValues;
+\}
+`}
+
+
+
+
+
+{`// The DocumentCountersOperation object:
+// =====================================
+\{
+ // Id of the document that holds the counters
+ documentId;
+ // A list of "CounterOperation" objects to perform
+ operations;
+\}
+`}
+
+
+
+
+
+{`// The CounterOperation object:
+// ============================
+\{
+ // The operation type: "Increment" | "Delete" | "Get"
+ type;
+ // The counter name
+ counterName;
+ // The value to increment by
+ delta;
+\}
+`}
+
+
+
+
+A document that has counters holds all its counter names in the `metadata`.
+Therefore, when creating a new counter, the parent document is modified, as the counter's name needs to be added to the metadata.
+Deleting a counter also modifies the parent document, as the counter's name needs to be removed from the metadata.
+Incrementing an existing counter will not modify the parent document.
+
+Even if a `DocumentCountersOperation` contains several `CounterOperation` items that affect the document's metadata (create, delete),
+the parent document will be modified **only once**, after all the `CounterOperation` items in this `DocumentCountersOperation` have been processed.
+If `DocumentCountersOperation` doesn't contain any `CounterOperation` that affects the metadata, the parent document won't be modified.
+
+
+
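+To see this, you can inspect the counter names kept in the parent document's metadata,
+as in the following minimal sketch (the document ID is illustrative):
+
+
+
+{`const session = documentStore.openSession();
+const user = await session.load("users/1");
+
+// The document metadata lists all counter names under the "@counters" key
+const metadata = session.advanced.getMetadataFor(user);
+const counterNames = metadata["@counters"]; // e.g. [ "Dislikes", "Likes" ]
+`}
+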
+
+
+## Return Value
+
+* *CounterBatchOperation* returns a `CountersDetail` object, which holds a list of `CounterDetail` objects.
+
+* If the type is `Increment` or `Get`, a `CounterDetail` object will be added to the result.
+ `Delete` operations will Not be included in the result.
+
+
+
+{`// The CounterDetails object:
+// ==========================
+\{
+ // A list of "CounterDetail" objects;
+ counters;
+\}
+`}
+
+
+
+
+
+{`// The CounterDetail object:
+// =========================
+\{
+ // ID of the document that holds the counter
+ documentId; // string
+
+ // The counter name
+ counterName; //string
+
+ // Total counter value
+ totalValue; // number
+
+ // A dictionary of counter values per database node
+ counterValues?;
+
+ // Etag of counter
+ etag?; // number;
+
+ // Change vector of counter
+ changeVector?; // string
+\}
+`}
+
+
+
+
+
+## Examples
+
+Assume we have two documents, `users/1` and `users/2`, that hold 3 counters each -
+_"Likes"_, _"Dislikes"_ and _"Downloads"_ - with values 10, 20 and 30 (respectively).
+
+### Example #1 : Increment Multiple Counters in a Batch
+
+
+
+{`// Define the counter actions you want to make per document:
+// =========================================================
+
+const counterActions1 = new DocumentCountersOperation();
+counterActions1.documentId = "users/1";
+counterActions1.operations = [
+ CounterOperation.create("Likes", "Increment", 5), // Increment "Likes" by 5
+ CounterOperation.create("Dislikes", "Increment") // No delta specified, value will stay the same
+];
+
+const counterActions2 = new DocumentCountersOperation();
+counterActions2.documentId = "users/2";
+counterActions2.operations = [
+ CounterOperation.create("Likes", "Increment", 100), // Increment "Likes" by 100
+ CounterOperation.create("Score", "Increment", 50) // Create a new counter "Score" with value 50
+];
+
+// Define the batch:
+// =================
+const batch = new CounterBatch();
+batch.documents = [counterActions1, counterActions2];
+
+// Define the counter batch operation, pass the batch:
+// ===================================================
+const counterBatchOp = new CounterBatchOperation(batch);
+
+// Execute the operation by passing it to operations.send:
+// =======================================================
+const result = await documentStore.operations.send(counterBatchOp);
+const counters = result.counters;
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "counters":
+ [
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Likes",
+ "totalValue" : 15,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Dislikes",
+ "totalValue" : 20,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/2",
+ "counterName" : "Likes",
+ "totalValue" : 110,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/2",
+ "counterName" : "score",
+ "totalValue" : 50,
+ "counterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+### Example #2 : Get Multiple Counters in a Batch
+
+
+
+{`// Define the counter actions you want to make per document:
+// =========================================================
+
+const counterActions1 = new DocumentCountersOperation();
+counterActions1.documentId = "users/1";
+counterActions1.operations = [
+ CounterOperation.create("Likes", "Get"),
+ CounterOperation.create("Downloads", "Get")
+];
+
+const counterActions2 = new DocumentCountersOperation();
+counterActions2.documentId = "users/2";
+counterActions2.operations = [
+ CounterOperation.create("Likes", "Get"),
+ CounterOperation.create("Score", "Get")
+];
+
+// Define the batch:
+// =================
+const batch = new CounterBatch();
+batch.documents = [counterActions1, counterActions2];
+
+// Define the counter batch operation, pass the batch:
+// ===================================================
+const counterBatchOp = new CounterBatchOperation(batch);
+
+// Execute the operation by passing it to operations.send:
+// =======================================================
+const result = await documentStore.operations.send(counterBatchOp);
+const counters = result.counters;
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "counters":
+ [
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Likes",
+ "totalValue" : 15,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Downloads",
+ "totalValue" : 30,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/2",
+ "counterName" : "Likes",
+ "totalValue" : 110,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/2",
+ "counterName" : "Score",
+ "totalValue" : 50,
+ "counterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+### Example #3 : Delete Multiple Counters in a Batch
+
+
+
+{`// Define the counter actions you want to make per document:
+// =========================================================
+
+const counterActions1 = new DocumentCountersOperation();
+counterActions1.documentId = "users/1";
+counterActions1.operations = [
+ // "Likes" and "Dislikes" will be removed from counter-names in "users/1" metadata
+ CounterOperation.create("Likes", "Delete"),
+ CounterOperation.create("Dislikes", "Delete")
+];
+
+const counterActions2 = new DocumentCountersOperation();
+counterActions2.documentId = "users/2";
+counterActions2.operations = [
+ // "Downloads" will be removed from counter-names in "users/2" metadata
+ CounterOperation.create("Downloads", "Delete")
+];
+
+// Define the batch:
+// =================
+const batch = new CounterBatch();
+batch.documents = [counterActions1, counterActions2];
+
+// Define the counter batch operation, pass the batch:
+// ===================================================
+const counterBatchOp = new CounterBatchOperation(batch);
+
+// Execute the operation by passing it to operations.send:
+// =======================================================
+const result = await documentStore.operations.send(counterBatchOp);
+const counters = result.counters;
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "counters": []
+\}
+`}
+
+
+### Example #4 : Mix Different Types of CounterOperations in a Batch
+
+
+
+{`// Define the counter actions you want to make per document:
+// =========================================================
+
+const counterActions1 = new DocumentCountersOperation();
+counterActions1.documentId = "users/1";
+counterActions1.operations = [
+ CounterOperation.create("Likes", "Increment", 30),
+ // The results will include null for this 'Get'
+ // since we deleted the "Dislikes" counter in the previous example flow
+ CounterOperation.create("Dislikes", "Get"),
+ CounterOperation.create("Downloads", "Delete")
+];
+
+const counterActions2 = new DocumentCountersOperation();
+counterActions2.documentId = "users/2";
+counterActions2.operations = [
+ CounterOperation.create("Likes", "Get"),
+ CounterOperation.create("Dislikes", "Delete")
+];
+
+// Define the batch:
+// =================
+const batch = new CounterBatch();
+batch.documents = [counterActions1, counterActions2];
+
+// Define the counter batch operation, pass the batch:
+// ===================================================
+const counterBatchOp = new CounterBatchOperation(batch);
+
+// Execute the operation by passing it to operations.send:
+// =======================================================
+const result = await documentStore.operations.send(counterBatchOp);
+const counters = result.counters;
+`}
+
+
+
+#### Result:
+
+* Note: The `Delete` operations are Not included in the result.
+
+
+
+{`\{
+ "counters":
+ [
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Likes",
+ "totalValue" : 30,
+ "counterValues" : null
+ \},
+ null,
+ \{
+ "documentId" : "users/2",
+ "counterName" : "Likes",
+ "totalValue" : 110,
+ "counterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-php.mdx b/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-php.mdx
new file mode 100644
index 0000000000..781d0c0112
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_counter-batch-php.mdx
@@ -0,0 +1,374 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+*CounterBatchOperation* allows you to operate on multiple counters (`Increment`, `Get`, `Delete`) of different documents in a **single request**.
+
+## Syntax
+
+
+
+{`class CounterBatchOperation
+\{
+ public function __construct(CounterBatch $counterBatch) \{ ... \}
+\}
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **counterBatch** | `CounterBatch` | An object that holds a list of `DocumentCountersOperation`. Each element in the list describes the counter operations to perform for a specific document |
+
+
+
+{`class CounterBatch
+\{
+ private bool $replyWithAllNodesValues = false; // A flag that indicates if the results should include a
+ // dictionary of counter values per database node
+
+ private ?DocumentCountersOperationList $documents = null;
+
+ private bool $fromEtl = false;
+
+ // ... getter and setters
+\}
+`}
+
+
+
+#### DocumentCountersOperation
+
+
+
+{`class DocumentCountersOperation
+\{
+ private ?CounterOperationList $operations = null; // A list of counter operations to perform
+ private ?string $documentId = null; // Id of the document that holds the counters
+\}
+`}
+
+
+
+#### CounterOperation
+
+
+
+{`class CounterOperation
+\{
+ private ?CounterOperationType $type = null;
+ private ?string $counterName = null;
+ private ?int $delta = null; // the value to increment by
+\}
+`}
+
+
+
+#### CounterOperationType
+
+
+
+{`class CounterOperationType
+\{
+ public function isIncrement(): bool;
+ public static function increment(): CounterOperationType;
+
+ public function isDelete(): bool;
+ public static function delete(): CounterOperationType;
+
+ public function isGet(): bool;
+ public static function get(): CounterOperationType;
+
+ public function isPut(): bool;
+ public static function put(): CounterOperationType;
+\}
+`}
+
+
+
+
+A document that has counters holds all its counter names in the `metadata`.
+Therefore, when creating a new counter, the parent document is modified, as the counter's name needs to be added to the metadata.
+Deleting a counter also modifies the parent document, as the counter's name needs to be removed from the metadata.
+Incrementing an existing counter will not modify the parent document.
+
+Even if a `DocumentCountersOperation` contains several `CounterOperation` items that affect the document's metadata (create, delete),
+the parent document will be modified **only once**, after all the `CounterOperation` items in this `DocumentCountersOperation` have been processed.
+If `DocumentCountersOperation` doesn't contain any `CounterOperation` that affects the metadata, the parent document won't be modified.
+
+
+
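+To see this, you can inspect the counter names kept in the parent document's metadata,
+as in the following minimal sketch (the 'User' class and document ID are illustrative,
+and metadata access via getMetadataFor is assumed to mirror the other clients):
+
+
+
+{`$session = $store->openSession();
+
+$user = $session->load(User::class, "users/1");
+
+// The document metadata lists all counter names under the "@counters" key
+$metadata = $session->advanced()->getMetadataFor($user);
+$counterNames = $metadata["@counters"]; // e.g. [ "dislikes", "likes" ]
+`}
+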
+
+
+## Return Value
+
+* *CounterBatchOperation* returns a `CountersDetail` object, which holds a list of `CounterDetail` objects.
+
+* If a `CounterOperationType` is `Increment` or `Get`, a `CounterDetail` object will be added to the result.
+ `Delete` operations will not be included in the result.
+
+
+
+{`class CountersDetail
+\{
+ private ?CounterDetailList $counters = null;
+\}
+`}
+
+
+
+
+
+{`class CounterDetail
+\{
+ private ?string $documentId = null; // ID of the document that holds the counter
+ private ?string $counterName = null; // The counter name
+ private ?int $totalValue = null; // Total counter value
+ private ?int $etag = null; // Counter Etag
+ private ?array $counterValues = []; // A dictionary of counter values per database node
+
+ private ?string $changeVector = null; // Change vector of the counter
+
+ // ... getters and setters
+\}
+
+class CounterDetailList extends TypedList
+\{
+ public function __construct()
+ \{
+ parent::__construct(CounterDetail::class);
+ $this->setNullAllowed(true);
+ \}
+\}
+`}
+
+
+
+
+
+## Examples
+
+Assume we have two documents, `users/1` and `users/2`, that hold 3 counters each:
+`likes`, `dislikes` and `downloads` - with values 10, 20 and 30 (respectively).
+
+### Example #1 : Increment Multiple Counters in a Batch
+
+
+
+{`$operation1 = new DocumentCountersOperation();
+$operation1->setDocumentId("users/1");
+$operation1->setOperations([
+ CounterOperation::create("likes", CounterOperationType::increment(), 5),
+ CounterOperation::create("dislikes", CounterOperationType::increment()) // No delta specified; the value stays the same
+]);
+
+$operation2 = new DocumentCountersOperation();
+$operation2->setDocumentId("users/2");
+$operation2->setOperations([
+ CounterOperation::create("likes", CounterOperationType::increment(), 100),
+
+ // this will create a new counter "score", with initial value 50
+ // "score" will be added to counter-names in "users/2" metadata
+ CounterOperation::create("score", CounterOperationType::increment(), 50)
+]);
+
+$counterBatch = new CounterBatch();
+$counterBatch->setDocuments([$operation1, $operation2]);
+$store->operations()->send(new CounterBatchOperation($counterBatch));
+`}
+
+
+
+#### Result:
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 15,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "score",
+ "TotalValue" : 50,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+### Example #2 : Get Multiple Counters in a Batch
+
+
+
+{`$operation1 = new DocumentCountersOperation();
+$operation1->setDocumentId("users/1");
+$operation1->setOperations([
+ CounterOperation::create("likes", CounterOperationType::get()),
+ CounterOperation::create("downloads", CounterOperationType::get())
+]);
+
+$operation2 = new DocumentCountersOperation();
+$operation2->setDocumentId("users/2");
+$operation2->setOperations([
+ CounterOperation::create("likes", CounterOperationType::get()),
+ CounterOperation::create("score", CounterOperationType::get())
+]);
+
+$counterBatch = new CounterBatch();
+$counterBatch->setDocuments([$operation1, $operation2]);
+
+$store->operations()->send(new CounterBatchOperation($counterBatch));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 15,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "downloads",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "score",
+ "TotalValue" : 50,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+### Example #3 : Delete Multiple Counters in a Batch
+
+
+
+{`$operation1 = new DocumentCountersOperation();
+$operation1->setDocumentId("users/1");
+$operation1->setOperations([
+ // "likes" and "dislikes" will be removed from counter-names in "users/1" metadata
+ CounterOperation::create("likes", CounterOperationType::delete()),
+ CounterOperation::create("dislikes", CounterOperationType::delete())
+]);
+
+$operation2 = new DocumentCountersOperation();
+$operation2->setDocumentId("users/2");
+$operation2->setOperations([
+ // "downloads" will be removed from counter-names in "users/2" metadata
+ CounterOperation::create("downloads", CounterOperationType::delete())
+]);
+
+$counterBatch = new CounterBatch();
+$counterBatch->setDocuments([$operation1, $operation2]);
+$store->operations()->send(new CounterBatchOperation($counterBatch));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters": []
+\}
+`}
+
+
+### Example #4 : Mix Different Types of CounterOperations in a Batch
+
+
+
+{`$operation1 = new DocumentCountersOperation();
+$operation1->setDocumentId("users/1");
+$operation1->setOperations([
+ CounterOperation::create("likes", CounterOperationType::increment(), 30),
+ // The results will include null for this 'Get'
+ // since we deleted the "dislikes" counter in the previous example
+ CounterOperation::create("dislikes", CounterOperationType::get()),
+ CounterOperation::create("downloads", CounterOperationType::delete())
+]);
+
+$operation2 = new DocumentCountersOperation();
+$operation2->setDocumentId("users/2");
+$operation2->setOperations([
+ CounterOperation::create("likes", CounterOperationType::get()),
+ CounterOperation::create("dislikes", CounterOperationType::delete())
+]);
+
+$counterBatch = new CounterBatch();
+$counterBatch->setDocuments([$operation1, $operation2]);
+$store->operations()->send(new CounterBatchOperation($counterBatch));
+`}
+
+
+
+#### Result:
+
+* Note: The `Delete` operations are not included in the results.
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \},
+ null,
+ \{
+ "DocumentId" : "users/2",
+ "CounterName" : "likes",
+ "TotalValue" : 110,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-csharp.mdx
new file mode 100644
index 0000000000..4159ee013b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-csharp.mdx
@@ -0,0 +1,245 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to get counter values for a specific document.
+It can retrieve a single counter's value, the values of several named counters, or all the counters of the document.
+
+## Syntax
+
+#### Get Single Counter
+
+
+
+{`public GetCountersOperation(string docId, string counter, bool returnFullResults = false)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | The ID of the document that holds the counters |
+| **counter** | string | The name of the counter to get |
+| **returnFullResults** | bool | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+
+
+**Return Full Results flag:**
+
+If RavenDB is running in a distributed cluster and the database resides on several nodes,
+a counter can have a different *local* value on each database node. The total counter value is the
+sum of the counter's local values from all the nodes.
+To get the counter values per database node, set the `returnFullResults` flag to `true`.
+
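+For instance, the per-node values returned with the full results should add up to the total.
+A minimal sketch (assumes a `using System.Linq;` directive):
+
+
+
+{`var details = store.Operations.Send(
+ new GetCountersOperation("users/1", "likes", returnFullResults: true));
+
+var likes = details.Counters[0];
+
+// The total value is the sum of the per-node values
+var sum = likes.CounterValues.Values.Sum(); // equals likes.TotalValue
+`}
+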
+
+#### Get Multiple Counters
+
+
+
+{`public GetCountersOperation(string docId, string[] counters, bool returnFullResults = false)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | The ID of the document that holds the counters |
+| **counters** | string[] | The names of the counters to get |
+| **returnFullResults** | bool | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+#### Get All Counters of a Document
+
+
+
+{`public GetCountersOperation(string docId, bool returnFullResults = false)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | The ID of the document that holds the counters |
+| **returnFullResults** | bool | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+
+
+## Return Value
+
+The operation returns a `CountersDetail` object, which holds a list of `CounterDetail` objects.
+
+
+
+{`public class CountersDetail
+\{
+ public List<CounterDetail> Counters;
+\}
+`}
+
+
+
+
+
+{`public class CounterDetail
+\{
+ public string DocumentId; // ID of the document that holds the counter
+ public string CounterName; // The counter name
+ public long TotalValue; // Total counter value
+ public Dictionary<string, long> CounterValues; // A dictionary of counter values per database node
+ public long Etag; // Counter Etag
+ public string ChangeVector; // Change vector of the counter
+\}
+`}
+
+
+
+
+
+## Examples
+
+Assume we have a `users/1` document that holds 3 counters:
+`likes`, `dislikes` and `downloads` - with values 10, 20, and 30, respectively.
+
+### Example #1 : Get single counter
+
+
+
+{`var operationResult = store.Operations
+ .Send(new GetCountersOperation("users/1", "likes"));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #2 : Get multiple counters
+
+
+
+{`var operationResult = store.Operations
+ .Send(new GetCountersOperation("users/1", new []\{"likes", "dislikes" \}));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #3 : Get all counters
+
+
+
+{`var operationResult = store.Operations
+ .Send(new GetCountersOperation("users/1"));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "downloads",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #4 : Include full values in the result
+
+
+
+{`var operationResult = store.Operations
+ .Send(new GetCountersOperation("users/1", "likes", true));
+`}
+
+
+
+#### Result:
+
+Assuming a 3-node cluster, the distribution of the counter's value to nodes A, B, and C could be as follows:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" :
+ \{
+ "A:35-UuCp420vs0u+URADcGVURA" : 5,
+ "B:83-SeCFU29daUOxfjUcAlLiJw" : 3,
+ "C:27-7i7GP8bOOkGYLNflO/rSeg" : 2
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-java.mdx b/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-java.mdx
new file mode 100644
index 0000000000..4fed461c61
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-java.mdx
@@ -0,0 +1,252 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to get counter values for a specific document.
+It can retrieve a single counter's value, the values of several named counters, or all the counters of the document.
+
+## Syntax
+
+#### Get Single Counter
+
+
+
+{`public GetCountersOperation(String docId, String counter)
+public GetCountersOperation(String docId, String counter, boolean returnFullResults)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | String | The ID of the document that holds the counters |
+| **counter** | String | The name of the counter to get |
+| **returnFullResults** | boolean | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+
+
+**Return Full Results flag**:
+
+If RavenDB is running in a distributed cluster and the database resides on several nodes,
+a counter can have a different *local* value on each database node. The total counter value is the
+sum of the counter's local values from all the nodes.
+To get the counter values per database node, set the `returnFullResults` flag to `true`.
+
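+For instance, the per-node values returned with the full results should add up to the total.
+A minimal sketch; the getter names follow the class sketches in the Return Value section below:
+
+
+
+{`CountersDetail details = store.operations()
+ .send(new GetCountersOperation("users/1", "likes", true));
+
+CounterDetail likes = details.getCounters().get(0);
+
+// The total value is the sum of the per-node values
+long sum = likes.getCounterValues()
+ .values()
+ .stream()
+ .mapToLong(Long::longValue)
+ .sum(); // equals likes.getTotalValue()
+`}
+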
+
+#### Get Multiple Counters
+
+
+
+{`public GetCountersOperation(String docId, String[] counters)
+public GetCountersOperation(String docId, String[] counters, boolean returnFullResults)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | String | The ID of the document that holds the counters |
+| **counters** | String[] | The names of the counters to get |
+| **returnFullResults** | boolean | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+#### Get All Counters of a Document
+
+
+
+{`public GetCountersOperation(String docId)
+public GetCountersOperation(String docId, boolean returnFullResults)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | String | The ID of the document that holds the counters |
+| **returnFullResults** | boolean | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+
+
+## Return Value
+
+The operation returns a `CountersDetail` object, which holds a list of `CounterDetail` objects.
+
+
+
+{`public class CountersDetail \{
+
+ private List<CounterDetail> counters;
+
+ // getters and setters
+\}
+`}
+
+
+
+
+
+{`public class CounterDetail \{
+ private String documentId; // ID of the document that holds the counter
+ private String counterName; // The counter name
+ private long totalValue; // Total counter value
+ private long etag; // Counter Etag
+ private Map<String, Long> counterValues; // A map of counter values per database node
+
+ private String changeVector; // Change vector of the counter
+
+ // getters and setters
+\}
+`}
+
+
+
+
+
+## Examples
+
+Assume we have a `users/1` document that holds 3 counters:
+`likes`, `dislikes` and `downloads` - with values 10, 20, and 30, respectively.
+
+### Example #1 : Get single counter
+
+
+
+{`CountersDetail operationResult = store.operations()
+ .send(new GetCountersOperation("users/1", "likes"));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #2 : Get multiple counters
+
+
+
+{`CountersDetail operationResult = store.operations()
+ .send(new GetCountersOperation("users/1", new String[]\{ "likes", "dislikes" \}));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #3 : Get all counters
+
+
+
+{`CountersDetail operationResult = store.operations()
+ .send(new GetCountersOperation("users/1"));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "downloads",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #4 : Include full values in the result
+
+
+
+{`CountersDetail operationResult = store.operations()
+ .send(new GetCountersOperation("users/1", "likes", true));
+`}
+
+
+
+#### Result:
+
+Assuming a 3-node cluster, the distribution of the counter's value to nodes A, B, and C could be as follows:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" :
+ \{
+ "A:35-UuCp420vs0u+URADcGVURA" : 5,
+ "B:83-SeCFU29daUOxfjUcAlLiJw" : 3,
+ "C:27-7i7GP8bOOkGYLNflO/rSeg" : 2
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-nodejs.mdx
new file mode 100644
index 0000000000..df12c024d6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-nodejs.mdx
@@ -0,0 +1,259 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to get counter values for a specific document.
+It can retrieve a single counter's value, the values of several named counters, or all the counters of the document.
+
+## Syntax
+
+
+
+{`// Get single counter
+const getCountersOp = new GetCountersOperation(docId, counter);
+const getCountersOp = new GetCountersOperation(docId, counter, returnFullResults = false);
+`}
+
+
+
+
+
+{`// Get multiple counters
+const getCountersOp = new GetCountersOperation(docId, counters);
+const getCountersOp = new GetCountersOperation(docId, counters, returnFullResults = false);
+`}
+
+
+
+
+
+{`// Get all counters of a document
+const getCountersOp = new GetCountersOperation(docId);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------------|----------|-----------------------------------------------------------------------------------------------------------------------|
+| **docId** | string | The ID of the document that holds the counters |
+| **counter** | string | The name of the counter to get |
+| **counters** | string[] | The list of counter names to get |
+| **returnFullResults** | boolean | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+
+
+**The full results flag:**
+
+If RavenDB is running in a distributed cluster, and the database resides on several nodes,
+then a counter can have a different *local* value on each database node.
+The total counter value is the sum of all the local values of this counter from each node.
+To get the counter values per database node, set the `returnFullResults` flag to `true`.
+
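+For instance, the per-node values returned with the full results should add up to the total.
+A minimal sketch:
+
+
+
+{`const result = await documentStore.operations.send(
+ new GetCountersOperation("users/1", "Likes", true));
+
+const likes = result.counters[0];
+
+// The total value is the sum of the per-node values
+const sum = Object.values(likes.counterValues)
+ .reduce((acc, val) => acc + val, 0); // equals likes.totalValue
+`}
+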
+
+
+
+
+## Return Value
+
+The operation returns a `CountersDetail` object, which holds a list of `CounterDetail` objects.
+
+
+
+{`// The CountersDetail object:
+// ==========================
+\{
+ // A list of "CounterDetail" objects
+ counters;
+\}
+`}
+
+
+
+
+
+{`// The CounterDetail object:
+// =========================
+\{
+ // ID of the document that holds the counter
+ documentId; // string
+
+ // The counter name
+ counterName; // string
+
+ // Total counter value
+ totalValue; // number
+
+ // A dictionary of counter values per database node
+ counterValues?;
+
+ // Etag of counter
+ etag?; // number;
+
+ // Change vector of counter
+ changeVector?; // string
+\}
+`}
+
+
+
+
+
+## Examples
+
+Assume we have a `users/1` document that holds 3 counters:
+`Likes`, `Dislikes` and `Downloads` - with values 10, 20, and 30, respectively.
+
+### Example #1 : Get single counter
+
+
+
+{`// Define the get counters operation
+const getCountersOp = new GetCountersOperation("users/1", "Likes");
+
+// Execute the operation by passing it to operations.send
+const result = await documentStore.operations.send(getCountersOp);
+const counters = result.counters;
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "counters":
+ [
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Likes",
+ "totalValue" : 10,
+ "counterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #2 : Get multiple counters
+
+
+
+{`const getCountersOp = new GetCountersOperation("users/1", ["Likes", "Dislikes"]);
+
+const result = await documentStore.operations.send(getCountersOp);
+const counters = result.counters;
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "counters":
+ [
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Likes",
+ "totalValue" : 10,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Dislikes",
+ "totalValue" : 20,
+ "counterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #3 : Get all counters
+
+
+
+{`const getCountersOp = new GetCountersOperation("users/1");
+
+const result = await documentStore.operations.send(getCountersOp);
+const counters = result.counters;
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "counters":
+ [
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Likes",
+ "totalValue" : 10,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Dislikes",
+ "totalValue" : 20,
+ "counterValues" : null
+ \},
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Downloads",
+ "totalValue" : 30,
+ "counterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #4 : Include full values in the result
+
+
+
+{`const getCountersOp = new GetCountersOperation("users/1", "Likes", true);
+
+const result = await documentStore.operations.send(getCountersOp);
+const counters = result.counters;
+`}
+
+
+
+#### Result:
+
+Assuming a 3-node cluster, the distribution of the counter's value to nodes A, B, and C could be as follows:
+
+
+
+{`\{
+ "counters":
+ [
+ \{
+ "documentId" : "users/1",
+ "counterName" : "Likes",
+ "totalValue" : 10,
+ "counterValues" :
+ \{
+ "A:35-UuCp420vs0u+URADcGVURA" : 5,
+ "B:83-SeCFU29daUOxfjUcAlLiJw" : 3,
+ "C:27-7i7GP8bOOkGYLNflO/rSeg" : 2
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-php.mdx b/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-php.mdx
new file mode 100644
index 0000000000..3f7b215106
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/_get-counters-php.mdx
@@ -0,0 +1,294 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to get counter values for a specific document.
+It can retrieve a single counter's value, the values of several named counters, or all the counters of the document.
+
+## Syntax
+
+#### `GetCountersOperation`
+
+Use `GetCountersOperation` to get counters.
+Find usage examples below for getting a single counter, multiple counters, or all document counters.
+
+
+{`class GetCountersOperation \{
+ public function __construct(
+  ?string $docId,
+  string|StringArray|array|null $counters = null,
+  bool $returnFullResults = false
+ ) \{ ... \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | The ID of the document that holds the counters |
+| **counters** | `string` or `StringArray` or `array` or `null` | The name of a single counter, an array of counter names, or `null` to get all of the document's counters |
+| **returnFullResults** | bool | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+#### Get Single Counter
+
+
+
+{`$docId = "users/1";
+$counter = "likes";
+$returnFullResults = false;
+
+$operation = new GetCountersOperation($docId, $counter, $returnFullResults);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | The ID of the document that holds the counters |
+| **counter** | string | The name of the counter to get |
+| **returnFullResults** | bool | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+
+
+**Return Full Results flag:**
+
+If RavenDB is running in a distributed cluster and the database resides on several nodes,
+a counter can have a different *local* value on each database node. The total counter value is the
+sum of the counter's local values from all the nodes.
+To get the counter values per database node, set the `returnFullResults` flag to `true`.
+
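+For instance, the per-node values returned with the full results should add up to the total.
+A minimal sketch; the getter names and the array-style access to the counters list are assumptions based on the class sketches below:
+
+
+
+{`/** @var CountersDetail $details */
+$details = $store
+ ->operations()
+ ->send(new GetCountersOperation("users/1", "likes", true));
+
+$likes = $details->getCounters()[0];
+
+// The total value is the sum of the per-node values
+$sum = array_sum($likes->getCounterValues()); // equals $likes->getTotalValue()
+`}
+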
+
+#### Get Multiple Counters
+
+
+
+{`$docId = "users/1";
+$counters = [ "likes", "score"];
+$returnFullResults = false;
+
+$operation = new GetCountersOperation($docId, $counters, $returnFullResults);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | The ID of the document that holds the counters |
+| **counters** | `StringArray` or `array` | The names of the counters to get |
+| **returnFullResults** | bool | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+#### Get All Counters of a Document
+
+
+
+{`$docId = "users/1";
+$returnFullResults = false;
+
+$operation = new GetCountersOperation($docId, null, $returnFullResults);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **docId** | string | The ID of the document that holds the counters |
+| **returnFullResults** | bool | A flag that indicates whether the operation should include a dictionary of counter values per database node in the result |
+
+
+
+## Return Value
+
+The operation returns a `CountersDetail` object, which holds a list of `CounterDetail` objects.
+
+
+
+{`class CountersDetail
+\{
+ private ?CounterDetailList $counters = null;
+\}
+`}
+
+
+
+
+
+{`class CounterDetail
+\{
+ private ?string $documentId = null; // ID of the document that holds the counter
+ private ?string $counterName = null; // The counter name
+ private ?int $totalValue = null; // Total counter value
+ private ?int $etag = null; // Counter Etag
+ private ?array $counterValues = []; // A dictionary of counter values per database node
+
+ private ?string $changeVector = null; // Change vector of the counter
+
+ // ... getters and setters
+\}
+
+class CounterDetailList extends TypedList
+\{
+ public function __construct()
+ \{
+ parent::__construct(CounterDetail::class);
+ $this->setNullAllowed(true);
+ \}
+\}
+`}
+
+
+
+
+
+## Examples
+
+Assume we have a `users/1` document that holds 3 counters:
+`likes`, `dislikes` and `downloads` - with values 10, 20, and 30, respectively.
+
+### Example #1 : Get single counter
+
+
+
+{`/** @var CountersDetail $operationResult */
+$operationResult = $store
+ ->operations()
+ ->send(new GetCountersOperation("users/1", "likes"));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #2 : Get multiple counters
+
+
+
+{`/** @var CountersDetail $operationResult */
+$operationResult = $store
+ ->operations()
+ ->send(new GetCountersOperation("users/1", [ "likes", "dislikes" ]));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #3 : Get all counters
+
+
+
+{`/** @var CountersDetail $operationResult */
+$operationResult = $store->operations()
+ ->send(new GetCountersOperation("users/1"));
+`}
+
+
+
+#### Result:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "dislikes",
+ "TotalValue" : 20,
+ "CounterValues" : null
+ \},
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "downloads",
+ "TotalValue" : 30,
+ "CounterValues" : null
+ \}
+ ]
+\}
+`}
+
+
+
+### Example #4 : Include full values in the result
+
+
+
+{`/** @var CountersDetail $operationResult */
+$operationResult = $store
+ ->operations()
+ ->send(new GetCountersOperation("users/1", "likes", true));
+`}
+
+
+
+#### Result:
+
+Assuming a 3-node cluster, the distribution of the counter's value to nodes A, B, and C could be as follows:
+
+
+
+{`\{
+ "Counters":
+ [
+ \{
+ "DocumentId" : "users/1",
+ "CounterName" : "likes",
+ "TotalValue" : 10,
+ "CounterValues" :
+ \{
+ "A:35-UuCp420vs0u+URADcGVURA" : 5,
+ "B:83-SeCFU29daUOxfjUcAlLiJw" : 3,
+ "C:27-7i7GP8bOOkGYLNflO/rSeg" : 2
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/counter-batch.mdx b/versioned_docs/version-7.1/client-api/operations/counters/counter-batch.mdx
new file mode 100644
index 0000000000..a235199acf
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/counter-batch.mdx
@@ -0,0 +1,43 @@
+---
+title: "Counters Batch Operation"
+hide_table_of_contents: true
+sidebar_label: Counters Batch
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import CounterBatchCsharp from './_counter-batch-csharp.mdx';
+import CounterBatchJava from './_counter-batch-java.mdx';
+import CounterBatchPhp from './_counter-batch-php.mdx';
+import CounterBatchNodejs from './_counter-batch-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/counters/get-counters.mdx b/versioned_docs/version-7.1/client-api/operations/counters/get-counters.mdx
new file mode 100644
index 0000000000..fa9914a891
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/counters/get-counters.mdx
@@ -0,0 +1,43 @@
+---
+title: "Get Counters Operation"
+hide_table_of_contents: true
+sidebar_label: Get Counters
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetCountersCsharp from './_get-counters-csharp.mdx';
+import GetCountersJava from './_get-counters-java.mdx';
+import GetCountersPhp from './_get-counters-php.mdx';
+import GetCountersNodejs from './_get-counters-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_category_.json b/versioned_docs/version-7.1/client-api/operations/how-to/_category_.json
new file mode 100644
index 0000000000..61a11ebe76
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 1,
+ "label": "How to..."
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-csharp.mdx
new file mode 100644
index 0000000000..3baf4d6ed1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-csharp.mdx
@@ -0,0 +1,113 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* By default, all operations work on the default database defined in the [Document Store](../../../client-api/creating-document-store.mdx).
+
+* **To operate on a different database**, use the `ForDatabase` method.
+ If the requested database doesn't exist on the server, an exception will be thrown.
+
+* In this page:
+ * [Common operation: `Operations.ForDatabase`](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx#common-operation:-operationsfordatabase)
+ * [Maintenance operation: `Maintenance.ForDatabase`](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx#maintenance-operation:-maintenancefordatabase)
+
+## Common operation: `Operations.ForDatabase`
+
+* For reference, all common operations are listed [here](../../../client-api/operations/what-are-operations.mdx#common-operations).
+
+
+
+{`// Define default database on the store
+var documentStore = new DocumentStore
+\{
+ Urls = new[] \{ "yourServerURL" \},
+ Database = "DefaultDB"
+\}.Initialize();
+
+using (documentStore)
+\{
+ // Use 'ForDatabase', get operation executor for another database
+ OperationExecutor opExecutor = documentStore.Operations.ForDatabase("AnotherDB");
+
+ // Send the operation, e.g. 'GetRevisionsOperation' will be executed on "AnotherDB"
+ var revisionsInAnotherDB =
+ opExecutor.Send(new GetRevisionsOperation("Orders/1-A"));
+
+ // Without 'ForDatabase', the operation is executed on "DefaultDB"
+ var revisionsInDefaultDB =
+ documentStore.Operations.Send(new GetRevisionsOperation("Company/1-A"));
+\}
+`}
+
+
+**Syntax**:
+
+
+
+{`OperationExecutor ForDatabase(string databaseName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **databaseName** | `string` | Name of the database to operate on |
+
+| Return Value | Description |
+| - | - |
+| `OperationExecutor` | New instance of Operation Executor that is scoped to the requested database |
+
+
+
+## Maintenance operation: `Maintenance.ForDatabase`
+
+* For reference, all maintenance operations are listed [here](../../../client-api/operations/what-are-operations.mdx#maintenance-operations).
+
+
+
+{`// Define default database on the store
+var documentStore = new DocumentStore
+\{
+ Urls = new[] \{ "yourServerURL" \},
+ Database = "DefaultDB"
+\}.Initialize();
+
+using (documentStore)
+\{
+ // Use 'ForDatabase', get maintenance operation executor for another database
+ MaintenanceOperationExecutor opExecutor = documentStore.Maintenance.ForDatabase("AnotherDB");
+
+ // Send the maintenance operation, e.g. get database stats for "AnotherDB"
+ var statsForAnotherDB =
+ opExecutor.Send(new GetStatisticsOperation());
+
+ // Without 'ForDatabase', the stats are retrieved for "DefaultDB"
+ var statsForDefaultDB =
+ documentStore.Maintenance.Send(new GetStatisticsOperation());
+\}
+`}
+
+
+**Syntax**:
+
+
+
+{`MaintenanceOperationExecutor ForDatabase(string databaseName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **databaseName** | `string` | Name of the database to operate on |
+
+| Return Value | Description |
+| - | - |
+| `MaintenanceOperationExecutor` | New instance of Maintenance Operation Executor that is scoped to the requested database |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-java.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-java.mdx
new file mode 100644
index 0000000000..41693de669
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-java.mdx
@@ -0,0 +1,68 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+By default, the operations available directly in the store work on the default database that was set up for that store.
+To switch operations to a different database available on that server, use the **forDatabase** method.
+
+## Operations.forDatabase
+
+
+
+{`OperationExecutor forDatabase(String databaseName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **databaseName** | String | Name of a database for which you want to get new Operations |
+
+| Return Value | |
+| ------------- | ----- |
+| OperationExecutor | New instance of Operations that is scoped to the requested database |
+
+### Example
+
+
+
+{`OperationExecutor operations = documentStore.operations().forDatabase("otherDatabase");
+`}
+
+
+
+
+
+## How to Switch Maintenance Operations to a Different Database
+
+As with `operations`, the `maintenance` operations available directly in the store work by default on the default database that was set up for that store.
+To switch maintenance operations to a different database, use the **forDatabase** method.
+
+## Maintenance.forDatabase
+
+
+
+{`MaintenanceOperationExecutor forDatabase(String databaseName);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **databaseName** | String | Name of a database for which you want to get new maintenance operations |
+
+| Return Value | |
+| ------------- | ----- |
+| MaintenanceOperationExecutor | New instance of maintenance operations that is scoped to the requested database |
+
+### Example
+
+
+
+{`MaintenanceOperationExecutor maintenanceOperations = documentStore.maintenance().forDatabase("otherDatabase");
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-nodejs.mdx
new file mode 100644
index 0000000000..873d48e936
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-nodejs.mdx
@@ -0,0 +1,101 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* By default, all operations work on the default database defined in the [document store](../../../client-api/creating-document-store.mdx).
+
+* **To operate on a different database**, use the `forDatabase` method.
+ If the requested database doesn't exist on the server, an exception will be thrown.
+
+* In this page:
+ * [Common operation: `operations.forDatabase`](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx#common-operation:-operationsfordatabase)
+ * [Maintenance operation: `maintenance.forDatabase`](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx#maintenance-operation:-maintenancefordatabase)
+
+## Common operation: `operations.forDatabase`
+
+* For reference, all common operations are listed [here](../../../client-api/operations/what-are-operations.mdx#common-operations).
+
+
+
+{`// Define default database on the store
+const documentStore = new DocumentStore("yourServerURL", "DefaultDB");
+documentStore.initialize();
+
+// Use 'forDatabase', get operation executor for another database
+const opExecutor = documentStore.operations.forDatabase("AnotherDB");
+
+// Send the operation, e.g. 'GetRevisionsOperation' will be executed on "AnotherDB"
+const revisionsInAnotherDB =
+ await opExecutor.send(new GetRevisionsOperation("Orders/1-A"));
+
+// Without 'forDatabase', the operation is executed on "DefaultDB"
+const revisionsInDefaultDB =
+ await documentStore.operations.send(new GetRevisionsOperation("Company/1-A"));
+`}
+
+
+**Syntax**:
+
+
+
+{`store.operations.forDatabase(databaseName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **databaseName** | `string` | Name of the database to operate on |
+
+| Return Value | Description |
+| - | - |
+| `OperationExecutor` | New instance of Operation Executor that is scoped to the requested database |
+
+
+
+## Maintenance operation: `maintenance.forDatabase`
+
+* For reference, all maintenance operations are listed [here](../../../client-api/operations/what-are-operations.mdx#maintenance-operations).
+
+
+
+{`// Define default database on the store
+const documentStore = new DocumentStore("yourServerURL", "DefaultDB");
+documentStore.initialize();
+
+// Use 'forDatabase', get maintenance operation executor for another database
+const opExecutor = documentStore.maintenance.forDatabase("AnotherDB");
+
+// Send the maintenance operation, e.g. get database stats for "AnotherDB"
+const statsForAnotherDB =
+ await opExecutor.send(new GetStatisticsOperation());
+
+// Without 'forDatabase', the stats are retrieved for "DefaultDB"
+const statsForDefaultDB =
+ await documentStore.maintenance.send(new GetStatisticsOperation());
+`}
+
+
+**Syntax**:
+
+
+
+{`store.maintenance.forDatabase(databaseName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **databaseName** | `string` | Name of the database to operate on |
+
+| Return Value | Description |
+| - | - |
+| `MaintenanceOperationExecutor` | New instance of Maintenance Operation Executor that is scoped to the requested database |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-php.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-php.mdx
new file mode 100644
index 0000000000..69e7a47830
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-php.mdx
@@ -0,0 +1,113 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* By default, all operations work on the default database defined in the [Document Store](../../../client-api/creating-document-store.mdx).
+
+* **To operate on a different database**, use the `forDatabase` method.
+ If the requested database doesn't exist on the server, an exception will be thrown.
+
+* In this page:
+ * [Common operation: `forDatabase`](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx#common-operation:-fordatabase)
+  * [Maintenance operation: `maintenance.forDatabase`](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx#maintenance-operation:-maintenancefordatabase)
+
+## Common operation: `forDatabase`
+
+* For reference, all common operations are listed [here](../../../client-api/operations/what-are-operations.mdx#common-operations).
+
+
+
+{`// Define default database on the store
+$documentStore = new DocumentStore(
+ ["yourServerURL"],
+ "DefaultDB"
+);
+$documentStore->initialize();
+
+try \{
+ // Use 'forDatabase', get operation executor for another database
+ /** @var OperationExecutor $opExecutor */
+ $opExecutor = $documentStore->operations()->forDatabase("AnotherDB");
+
+ // Send the operation, e.g. 'GetRevisionsOperation' will be executed on "AnotherDB"
+ $revisionsInAnotherDB = $opExecutor->send(new GetRevisionsOperation(Order::class, "Orders/1-A"));
+
+ // Without 'forDatabase', the operation is executed on "DefaultDB"
+ $revisionsInDefaultDB = $documentStore->operations()->send(new GetRevisionsOperation(Company::class, "Company/1-A"));
+\} finally \{
+ $documentStore->close();
+\}
+`}
+
+
+**Syntax**:
+
+
+
+{`public function forDatabase(?string $databaseName): OperationExecutor;
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$databaseName** | `?string` | Name of the database to operate on |
+
+| Return Value | Description |
+| - | - |
+| `OperationExecutor` | New instance of Operation Executor that is scoped to the requested database |
+
+
+
+## Maintenance operation: `maintenance.forDatabase`
+
+* For reference, all maintenance operations are listed [here](../../../client-api/operations/what-are-operations.mdx#maintenance-operations).
+
+
+
+{`// Define default database on the store
+$documentStore = new DocumentStore(
+ [ "yourServerURL" ],
+ "DefaultDB"
+);
+$documentStore->initialize();
+
+try \{
+ // Use 'forDatabase', get maintenance operation executor for another database
+ /** @var MaintenanceOperationExecutor $opExecutor */
+ $opExecutor = $documentStore->maintenance()->forDatabase("AnotherDB");
+
+ // Send the maintenance operation, e.g. get database stats for "AnotherDB"
+ $statsForAnotherDB = $opExecutor->send(new GetStatisticsOperation());
+
+ // Without 'forDatabase', the stats are retrieved for "DefaultDB"
+ $statsForDefaultDB = $documentStore->maintenance()->send(new GetStatisticsOperation());
+\} finally \{
+ $documentStore->close();
+\}
+`}
+
+
+**Syntax**:
+
+
+
+{`public function forDatabase(?string $databaseName): MaintenanceOperationExecutor;
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$databaseName** | `?string` | Name of the database to operate on |
+
+| Return Value | Description |
+| - | - |
+| `MaintenanceOperationExecutor` | New instance of Maintenance Operation Executor that is scoped to the requested database |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-python.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-python.mdx
new file mode 100644
index 0000000000..e511d26de1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-database-python.mdx
@@ -0,0 +1,97 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* By default, all operations work on the default database defined in the [Document Store](../../../client-api/creating-document-store.mdx).
+
+* **To operate on a different database**, use the `for_database` method.
+ If the requested database doesn't exist on the server, an exception will be thrown.
+
+* In this page:
+ * [Common operation: `operations.for_database`](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx#common-operation:-operationsfor_database)
+ * [Maintenance operation: `maintenance.for_database`](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx#maintenance-operation:-maintenancefor_database)
+
+## Common operation: `operations.for_database`
+
+* For reference, all common operations are listed [here](../../../client-api/operations/what-are-operations.mdx#common-operations).
+
+
+
+{`# Define default database on the store
+document_store = DocumentStore(urls=["yourServerURL"], database="DefaultDB")
+document_store.initialize()
+
+with document_store:
+ # Use 'for_database', get operation executor for another database
+ op_executor = document_store.operations.for_database("AnotherDB")
+
+ # Send the operation, e.g. 'GetRevisionsOperation' will be executed on "AnotherDB"
+ revisions_in_another_db = op_executor.send(GetRevisionsOperation("Orders/1-A", Order))
+
+ # Without 'for_database', the operation is executed on "DefaultDB"
+ revisions_in_default_db = document_store.operations.send(GetRevisionsOperation("Company/1-A", Company))
+`}
+
+
+**Syntax**:
+
+
+
+{`def for_database(self, database_name: str) -> OperationExecutor: ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **database_name** | `str` | Name of the database to operate on |
+
+| Return Value | Description |
+| - | - |
+| `OperationExecutor` | New instance of Operation Executor that is scoped to the requested database |
+
+
+
+## Maintenance operation: `maintenance.for_database`
+
+* For reference, all maintenance operations are listed [here](../../../client-api/operations/what-are-operations.mdx#maintenance-operations).
+
+
+
+{`# Define default database on the store
+document_store = DocumentStore(urls=["yourServerURL"], database="DefaultDB")
+document_store.initialize()
+
+with document_store:
+ # Use 'for_database', get maintenance operation executor for another database
+ op_executor = document_store.maintenance.for_database("AnotherDB")
+ # Send the maintenance operation, e.g. get database stats for "AnotherDB"
+ stats_for_another_db = op_executor.send(GetStatisticsOperation())
+ # Without 'for_database', the stats are retrieved for "DefaultDB"
+ stats_for_default_db = document_store.maintenance.send(GetStatisticsOperation())
+`}
+
+
+**Syntax**:
+
+
+
+{`def for_database(self, database_name: str) -> MaintenanceOperationExecutor: ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **database_name** | `str` | Name of the database to operate on |
+
+| Return Value | Description |
+| - | - |
+| `MaintenanceOperationExecutor` | New instance of Maintenance Operation Executor that is scoped to the requested database |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-csharp.mdx
new file mode 100644
index 0000000000..53d9460da2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-csharp.mdx
@@ -0,0 +1,69 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* By default, when working with multiple nodes,
+ all client requests will access the server node that is defined by the client configuration.
+ (Learn more in: [Load balancing client requests](../../../client-api/configuration/load-balance/overview.mdx)).
+
+* However, **server maintenance operations** can be executed on a specific node by using the `ForNode` method.
+ (An exception is thrown if that node is not available).
+
+* In this page:
+  * [Server maintenance operations - ForNode](../../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx#server-maintenance-operations---fornode)
+
+## Server maintenance operations - ForNode
+
+* For reference, all server maintenance operations are listed [here](../../../client-api/operations/what-are-operations.mdx#server-maintenance-operations).
+
+
+
+{`// Default node access can be defined on the store
+var documentStore = new DocumentStore
+\{
+ Urls = new[] \{ "ServerURL_1", "ServerURL_2", "..." \},
+ Database = "DefaultDB",
+ Conventions = new DocumentConventions
+ \{
+ // For example:
+ // With ReadBalanceBehavior set to: 'FastestNode':
+ // Client READ requests will address the fastest node
+ // Client WRITE requests will address the preferred node
+ ReadBalanceBehavior = ReadBalanceBehavior.FastestNode
+ \}
+\}.Initialize();
+
+using (documentStore)
+\{
+ // Use 'ForNode' to override the default node configuration
+ // The Maintenance.Server operation will be executed on the specified node
+ var dbNames = documentStore.Maintenance.Server.ForNode("C")
+ .Send(new GetDatabaseNamesOperation(0, 25));
+\}
+`}
+
+
+
+**Syntax**:
+
+
+
+{`ServerOperationExecutor ForNode(string nodeTag);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **nodeTag** | string | The tag of the node to operate on |
+
+| Return Value | |
+| - | - |
+| `ServerOperationExecutor` | New instance of Server Operation Executor that is scoped to the requested node |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-nodejs.mdx
new file mode 100644
index 0000000000..6289c58606
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-nodejs.mdx
@@ -0,0 +1,63 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* By default, when working with multiple nodes,
+ all client requests will access the server node that is defined by the client configuration.
+ (Learn more in: [Load balancing client requests](../../../client-api/configuration/load-balance/overview.mdx)).
+
+* However, **server maintenance operations** can be executed on a specific node by using the `forNode` method.
+ (An exception is thrown if that node is not available).
+
+* In this page:
+ * [Server maintenance operations - forNode](../../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx#server-maintenance-operations---fornode)
+
+## Server maintenance operations - forNode
+
+* For reference, all server maintenance operations are listed [here](../../../client-api/operations/what-are-operations.mdx#server-maintenance-operations).
+
+
+
+{`// Default node access can be defined on the store
+const documentStore = new DocumentStore(["serverUrl_1", "serverUrl_2", "..."], "DefaultDB");
+
+// For example:
+// With readBalanceBehavior set to: 'FastestNode':
+// Client READ requests will address the fastest node
+// Client WRITE requests will address the preferred node
+documentStore.conventions.readBalanceBehavior = "FastestNode";
+documentStore.initialize();
+
+// Use 'forNode' to override the default node configuration
+// Get a server operation executor for a specific node
+const serverOpExecutor = await documentStore.maintenance.server.forNode("C");
+
+// The maintenance.server operation will be executed on the specified node 'C'
+const dbNames = await serverOpExecutor.send(new GetDatabaseNamesOperation(0, 25));
+`}
+
+
+
+**Syntax**:
+
+
+
+{`await store.maintenance.server.forNode(nodeTag);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **nodeTag** | string | The tag of the node to operate on |
+
+| Return Value | |
+| - | - |
+| `Promise<ServerOperationExecutor>` | A promise that returns a new instance of Server Operation Executor scoped to the requested node |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-php.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-php.mdx
new file mode 100644
index 0000000000..a15c6d6dac
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/_switch-operations-to-a-different-node-php.mdx
@@ -0,0 +1,72 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* By default, when working with multiple nodes,
+ all client requests will access the server node that is defined by the client configuration.
+ (Learn more in: [Load balancing client requests](../../../client-api/configuration/load-balance/overview.mdx)).
+
+* However, **server maintenance operations** can be executed on a specific node by using the `forNode` method.
+ (An exception is thrown if that node is not available).
+
+* In this page:
+  * [Server maintenance operations - forNode](../../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx#server-maintenance-operations---fornode)
+
+## Server maintenance operations - forNode
+
+* For reference, all server maintenance operations are listed [here](../../../client-api/operations/what-are-operations.mdx#server-maintenance-operations).
+
+
+
+{`// Default node access can be defined on the store
+$documentStore = new DocumentStore(
+ ["ServerURL_1", "ServerURL_2", "..."],
+ "DefaultDB"
+);
+
+$conventions = new DocumentConventions();
+
+// For example:
+// With ReadBalanceBehavior set to: 'FastestNode':
+// Client READ requests will address the fastest node
+// Client WRITE requests will address the preferred node
+$conventions->setReadBalanceBehavior(ReadBalanceBehavior::fastestNode());
+$documentStore->setConventions($conventions);
+
+$documentStore->initialize();
+
+try \{
+ // Use 'forNode' to override the default node configuration
+ // The maintenance()->server() operation will be executed on the specified node
+ $dbNames = $documentStore->maintenance()->server()->forNode("C")
+ ->send(new GetDatabaseNamesOperation(0, 25));
+\} finally \{
+ $documentStore->close();
+\}
+`}
+
+
+
+**Syntax**:
+
+
+
+{`public function forNode(string $nodeTag): ServerOperationExecutor
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$nodeTag** | `string` | The tag of the node to operate on |
+
+| Return Value | |
+| - | - |
+| `ServerOperationExecutor` | New instance of Server Operation Executor that is scoped to the requested node |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/switch-operations-to-a-different-database.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/switch-operations-to-a-different-database.mdx
new file mode 100644
index 0000000000..ba863f86ad
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/switch-operations-to-a-different-database.mdx
@@ -0,0 +1,47 @@
+---
+title: "Switch Operations to a Different Database"
+hide_table_of_contents: true
+sidebar_label: Switch operations to different database
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SwitchOperationsToADifferentDatabaseCsharp from './_switch-operations-to-a-different-database-csharp.mdx';
+import SwitchOperationsToADifferentDatabaseJava from './_switch-operations-to-a-different-database-java.mdx';
+import SwitchOperationsToADifferentDatabasePython from './_switch-operations-to-a-different-database-python.mdx';
+import SwitchOperationsToADifferentDatabasePhp from './_switch-operations-to-a-different-database-php.mdx';
+import SwitchOperationsToADifferentDatabaseNodejs from './_switch-operations-to-a-different-database-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/how-to/switch-operations-to-a-different-node.mdx b/versioned_docs/version-7.1/client-api/operations/how-to/switch-operations-to-a-different-node.mdx
new file mode 100644
index 0000000000..b9ff34e726
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/how-to/switch-operations-to-a-different-node.mdx
@@ -0,0 +1,37 @@
+---
+title: "Switch Operations to a Different Node"
+hide_table_of_contents: true
+sidebar_label: Switch operations to different node
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SwitchOperationsToADifferentNodeCsharp from './_switch-operations-to-a-different-node-csharp.mdx';
+import SwitchOperationsToADifferentNodePhp from './_switch-operations-to-a-different-node-php.mdx';
+import SwitchOperationsToADifferentNodeNodejs from './_switch-operations-to-a-different-node-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/_category_.json b/versioned_docs/version-7.1/client-api/operations/maintenance/_category_.json
new file mode 100644
index 0000000000..3df66cfce3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 3,
+  "label": "Maintenance Operations"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-csharp.mdx
new file mode 100644
index 0000000000..9c5e83cea3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-csharp.mdx
@@ -0,0 +1,193 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Statistics can be retrieved for the database and for collections.
+
+* By default, statistics are retrieved for the database defined in the Document Store.
+  To get database and collection statistics for another database use [ForDatabase](../../../client-api/operations/maintenance/get-stats.mdx#get-statistics-for-another-database).
+
+* In this page:
+ * [Get collection statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-collection-statistics)
+ * [Get detailed collection statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-statistics)
+ * [Get database statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-database-statistics)
+ * [Get detailed database statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-statistics)
+ * [Get statistics for another database](../../../client-api/operations/maintenance/get-stats.mdx#get-statistics-for-another-database)
+
+## Get collection statistics
+
+To get **collection statistics**, use `GetCollectionStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetCollectionStatisticsOperation\` to the store
+CollectionStatistics stats =
+ store.Maintenance.Send(new GetCollectionStatisticsOperation());
+`}
+
+
+Statistics are returned in the `CollectionStatistics` object.
+
+
+{`// Collection stats results:
+public class CollectionStatistics
+\{
+ // Total # of documents in all collections
+ public long CountOfDocuments \{ get; set; \}
+ // Total # of conflicts
+ public long CountOfConflicts \{ get; set; \}
+ // Total # of documents per collection
+    public Dictionary<string, long> Collections \{ get; set; \}
+\}
+`}
+
+
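+
+As a usage sketch (assuming an initialized `store`), the returned counts can be read directly,
+e.g. to print the number of documents per collection:
+
+
+{`// Usage sketch (assumes an initialized 'store'):
+CollectionStatistics stats =
+    store.Maintenance.Send(new GetCollectionStatisticsOperation());
+
+Console.WriteLine("Documents in all collections: " + stats.CountOfDocuments);
+foreach (var collection in stats.Collections)
+    Console.WriteLine(collection.Key + ": " + collection.Value);
+`}
+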
+
+
+
+## Get detailed collection statistics
+
+To get **detailed collection statistics**, use `GetDetailedCollectionStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetDetailedCollectionStatisticsOperation\` to the store
+DetailedCollectionStatistics stats =
+ store.Maintenance.Send(new GetDetailedCollectionStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DetailedCollectionStatistics` object.
+
+
+{`// Detailed collection stats results:
+public class DetailedCollectionStatistics
+\{
+ // Total # of documents in all collections
+ public long CountOfDocuments \{ get; set; \}
+ // Total # of conflicts
+ public long CountOfConflicts \{ get; set; \}
+ // Collection details per collection
+    public Dictionary<string, CollectionDetails> Collections \{ get; set; \}
+\}
+
+// Details per collection
+public class CollectionDetails
+\{
+ public string Name \{ get; set; \}
+ public long CountOfDocuments \{ get; set; \}
+ public Size Size \{ get; set; \}
+ public Size DocumentsSize \{ get; set; \}
+ public Size TombstonesSize \{ get; set; \}
+ public Size RevisionsSize \{ get; set; \}
+\}
+`}
+
+
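+
+A similar usage sketch (again assuming an initialized `store`) that reads the per-collection details:
+
+
+{`// Usage sketch (assumes an initialized 'store'):
+DetailedCollectionStatistics stats =
+    store.Maintenance.Send(new GetDetailedCollectionStatisticsOperation());
+
+foreach (var entry in stats.Collections)
+    Console.WriteLine(entry.Key + " holds " + entry.Value.CountOfDocuments + " documents");
+`}
+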
+
+
+
+## Get database statistics
+
+To get **database statistics**, use `GetStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetStatisticsOperation\` to the store
+DatabaseStatistics stats =
+ store.Maintenance.Send(new GetStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DatabaseStatistics` object.
+
+
+{`// Database stats results:
+public class DatabaseStatistics
+\{
+ public long? LastDocEtag \{ get; set; \} // Last document etag in database
+ public long? LastDatabaseEtag \{ get; set; \} // Last database etag
+
+ public int CountOfIndexes \{ get; set; \} // Total # of indexes in database
+ public long CountOfDocuments \{ get; set; \} // Total # of documents in database
+ public long CountOfRevisionDocuments \{ get; set; \} // Total # of revision documents in database
+ public long CountOfDocumentsConflicts \{ get; set; \} // Total # of documents conflicts in database
+ public long CountOfTombstones \{ get; set; \} // Total # of tombstones in database
+ public long CountOfConflicts \{ get; set; \} // Total # of conflicts in database
+ public long CountOfAttachments \{ get; set; \} // Total # of attachments in database
+ public long CountOfUniqueAttachments \{ get; set; \} // Total # of unique attachments in database
+ public long CountOfCounterEntries \{ get; set; \} // Total # of counter-group entries in database
+ public long CountOfTimeSeriesSegments \{ get; set; \} // Total # of time-series segments in database
+
+ // List of stale index names in database
+ public string[] StaleIndexes => Indexes?.Where(x => x.IsStale).Select(x => x.Name).ToArray();
+ // Statistics for each index in database
+ public IndexInformation[] Indexes \{ get; set; \}
+
+ public string DatabaseChangeVector \{ get; set; \} // Global change vector of the database
+ public string DatabaseId \{ get; set; \} // Database identifier
+ public bool Is64Bit \{ get; set; \} // Indicates if process is 64-bit
+ public string Pager \{ get; set; \} // Component handling the memory-mapped files
+ public DateTime? LastIndexingTime \{ get; set; \} // Last time of indexing an item
+ public Size SizeOnDisk \{ get; set; \} // Database size on disk
+ public Size TempBuffersSizeOnDisk \{ get; set; \} // Temp buffers size on disk
+ public int NumberOfTransactionMergerQueueOperations \{ get; set; \}
+\}
+`}
+
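+
+For example, the `StaleIndexes` helper shown above can be combined with this operation to check
+indexing freshness; a minimal sketch, assuming an initialized `store`:
+
+
+{`// Sketch: list any stale indexes reported by the database stats
+DatabaseStatistics stats =
+    store.Maintenance.Send(new GetStatisticsOperation());
+
+foreach (var indexName in stats.StaleIndexes ?? Array.Empty<string>())
+    Console.WriteLine("Stale index: " + indexName);
+`}
+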
+
+
+
+
+## Get detailed database statistics
+
+To get **detailed database statistics**, use `GetDetailedStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetDetailedStatisticsOperation\` to the store
+DetailedDatabaseStatistics stats =
+ store.Maintenance.Send(new GetDetailedStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DetailedDatabaseStatistics` object.
+
+
+{`// Detailed database stats results:
+public class DetailedDatabaseStatistics : DatabaseStatistics
+\{
+ // Total # of identities in database
+ public long CountOfIdentities \{ get; set; \}
+ // Total # of compare-exchange items in database
+ public long CountOfCompareExchange \{ get; set; \}
+ // Total # of cmpXchg tombstones in database
+ public long CountOfCompareExchangeTombstones \{ get; set; \}
+ // Total # of TS deleted ranges values in database
+ public long CountOfTimeSeriesDeletedRanges \{ get; set; \}
+\}
+`}
+
+
+
+
+
+## Get statistics for another database
+
+* By default, you get statistics for the database defined in your Document Store.
+* Use `ForDatabase` to get database and collection statistics for another database.
+* `ForDatabase` can be used with **any** of the above statistics options.
+
+
+
+{`// Get stats for 'AnotherDatabase':
+DatabaseStatistics stats =
+ store.Maintenance.ForDatabase("AnotherDatabase").Send(new GetStatisticsOperation());
+`}
+
+
+
+* Learn more about switching operations to another database [here](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-java.mdx
new file mode 100644
index 0000000000..be3f5037de
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-java.mdx
@@ -0,0 +1,195 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Statistics can be retrieved for the database and for collections.
+
+* By default, statistics are retrieved for the database defined in the Document Store.
+  To get database and collection statistics for another database use [forDatabase](../../../client-api/operations/maintenance/get-stats.mdx#get-stats-for-another-database).
+
+* In this page:
+ * [Get collection stats](../../../client-api/operations/maintenance/get-stats.mdx#get-collection-stats)
+ * [Get detailed collection stats](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-stats)
+ * [Get database stats](../../../client-api/operations/maintenance/get-stats.mdx#get-database-stats)
+ * [Get detailed database stats](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-stats)
+ * [Get stats for another database](../../../client-api/operations/maintenance/get-stats.mdx#get-stats-for-another-database)
+
+## Get collection stats
+
+Use `GetCollectionStatisticsOperation` to get **collection stats**.
+
+
+{`// Pass an instance of class \`GetCollectionStatisticsOperation\` to the store
+CollectionStatistics stats =
+ store.maintenance().send(new GetCollectionStatisticsOperation());
+`}
+
+
+
+
+Stats are returned in the `CollectionStatistics` object.
+
+
+{`// Collection stats results:
+public class CollectionStatistics \{
+ // Total # of documents in all collections
+ int CountOfDocuments;
+ // Total # of conflicts
+ int CountOfConflicts;
+ // Total # of documents per collection
+    Map<String, Long> Collections;
+\}
+`}
+
+
+
+
+
+## Get detailed collection stats
+
+Use `GetDetailedCollectionStatisticsOperation` to get **detailed collection stats**.
+
+
+{`// Pass an instance of class \`GetDetailedCollectionStatisticsOperation\` to the store
+DetailedCollectionStatistics stats =
+ store.maintenance().send(new GetDetailedCollectionStatisticsOperation());
+`}
+
+
+
+
+Stats are returned in the `DetailedCollectionStatistics` object.
+
+
+{`// Detailed collection stats results:
+public class DetailedCollectionStatistics \{
+ // Total # of documents in all collections
+ long CountOfDocuments;
+ // Total # of conflicts
+ long CountOfConflicts;
+ // Collection details per collection
+    Map<String, CollectionDetails> Collections;
+\}
+
+// Details per collection
+public class CollectionDetails \{
+ String Name;
+ long CountOfDocuments;
+ Size Size;
+ Size DocumentsSize;
+ Size TombstonesSize;
+ Size RevisionsSize;
+\}
+`}
+
+
+
+
+
+## Get database stats
+
+Use `GetStatisticsOperation` to get **database stats**.
+
+
+{`// Pass an instance of class \`GetStatisticsOperation\` to the store
+DatabaseStatistics stats =
+ store.maintenance().send(new GetStatisticsOperation());
+`}
+
+
+
+
+Stats are returned in the `DatabaseStatistics` object.
+
+
+{`// Database stats results:
+public class DatabaseStatistics \{
+ Long LastDocEtag; // Last document etag in database
+ Long LastDatabaseEtag; // Last database etag
+
+ int CountOfIndexes; // Total # of indexes in database
+ long CountOfDocuments; // Total # of documents in database
+ long CountOfRevisionDocuments; // Total # of revision documents in database
+ long CountOfDocumentsConflicts; // Total # of documents conflicts in database
+ long CountOfTombstones; // Total # of tombstones in database
+ long CountOfConflicts; // Total # of conflicts in database
+ long CountOfAttachments; // Total # of attachments in database
+ long CountOfUniqueAttachments; // Total # of unique attachments in database
+ long CountOfCounterEntries; // Total # of counter-group entries in database
+ long CountOfTimeSeriesSegments; // Total # of time-series segments in database
+
+ IndexInformation[] Indexes; // Statistics for each index in database
+
+ String DatabaseChangeVector; // Global change vector of the database
+ String DatabaseId; // Database identifier
+ boolean Is64Bit; // Indicates if process is 64-bit
+ String Pager; // Component handling the memory-mapped files
+ Date LastIndexingTime; // Last time of indexing an item
+ Size SizeOnDisk; // Database size on disk
+ Size TempBuffersSizeOnDisk; // Temp buffers size on disk
+ int NumberOfTransactionMergerQueueOperations;
+\}
+`}
+
+
+
+
+
+## Get detailed database stats
+
+Use `GetDetailedStatisticsOperation` to get **detailed database stats**.
+
+
+{`// Pass an instance of class \`GetDetailedStatisticsOperation\` to the store
+DetailedDatabaseStatistics stats =
+ store.maintenance().send(new GetDetailedStatisticsOperation());
+`}
+
+
+
+
+Stats are returned in the `DetailedDatabaseStatistics` object.
+
+
+{`// Detailed database stats results:
+public class DetailedDatabaseStatistics extends DatabaseStatistics \{
+ // Total # of identities in database
+ long CountOfIdentities;
+ // Total # of compare-exchange items in database
+ long CountOfCompareExchange;
+ // Total # of cmpXchg tombstones in database
+ long CountOfCompareExchangeTombstones;
+ // Total # of TS deleted ranges values in database
+ long CountOfTimeSeriesDeletedRanges;
+\}
+`}
+
+
+
+
+
+## Get stats for another database
+
+
+* By default, you get stats for the database defined in your Document Store.
+* Use `forDatabase` to get database and collection stats for another database.
+* `forDatabase` can be used with **any** of the above stats options.
+
+
+
+{`// Get stats for 'AnotherDatabase':
+DatabaseStatistics stats =
+ store.maintenance().forDatabase("AnotherDatabase").send(new GetStatisticsOperation());
+`}
+
+
+
+* Learn more about switching operations to another database [here](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx).
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-nodejs.mdx
new file mode 100644
index 0000000000..6e68a6b1e9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-nodejs.mdx
@@ -0,0 +1,200 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Statistics can be retrieved for the database and for collections.
+
+* By default, statistics are retrieved for the database defined in the Document Store.
+  To get database and collection statistics for another database use [forDatabase](../../../client-api/operations/maintenance/get-stats.mdx#get-statistics-for-another-database).
+
+* In this page:
+ * [Get collection statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-collection-statistics)
+ * [Get detailed collection statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-statistics)
+ * [Get database statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-database-statistics)
+ * [Get detailed database statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-statistics)
+ * [Get statistics for another database](../../../client-api/operations/maintenance/get-stats.mdx#get-statistics-for-another-database)
+
+## Get collection statistics
+
+To get **collection statistics**, use `GetCollectionStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetCollectionStatisticsOperation\` to the store
+const stats = await store.maintenance.send(new GetCollectionStatisticsOperation());
+`}
+
+
+Statistics are returned in the `CollectionStatistics` object.
+
+
+{`// Object with following props is returned:
+\{
+ // Total # of documents in all collections
+ countOfDocuments,
+ // Total # of conflicts
+ countOfConflicts,
+ // Dictionary with total # of documents per collection
+ collections
+\}
+`}
+
+
+
+
+
+## Get detailed collection statistics
+
+To get **detailed collection statistics**, use `GetDetailedCollectionStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetDetailedCollectionStatisticsOperation\` to the store
+const stats = await store.maintenance.send(new GetDetailedCollectionStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DetailedCollectionStatistics` object.
+
+
+{`// Object with following props is returned:
+\{
+    // Total # of documents in all collections
+    countOfDocuments,
+    // Total # of conflicts
+    countOfConflicts,
+    // Dictionary with 'collection details per collection'
+    collections,
+\}
+
+// 'Collection details per collection' object props:
+\{
+    name,
+    countOfDocuments,
+    size,
+    documentsSize,
+    tombstonesSize,
+    revisionsSize
+\}
+`}
+
+
+
+
+
+## Get database statistics
+
+To get **database statistics**, use `GetStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetStatisticsOperation\` to the store
+const stats = await store.maintenance.send(new GetStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DatabaseStatistics` object.
+
+
+{`// Object with following props is returned:
+\{
+ lastDocEtag, // Last document etag in database
+ lastDatabaseEtag, // Last database etag
+
+ countOfIndexes, // Total # of indexes in database
+ countOfDocuments, // Total # of documents in database
+ countOfRevisionDocuments, // Total # of revision documents in database
+ countOfDocumentsConflicts, // Total # of documents conflicts in database
+ countOfTombstones, // Total # of tombstones in database
+ countOfConflicts, // Total # of conflicts in database
+ countOfAttachments, // Total # of attachments in database
+ countOfUniqueAttachments, // Total # of unique attachments in database
+ countOfCounterEntries, // Total # of counter-group entries in database
+ countOfTimeSeriesSegments, // Total # of time-series segments in database
+
+ indexes, // Statistics for each index in database (array of IndexInformation)
+
+ databaseChangeVector, // Global change vector of the database
+ databaseId, // Database identifier
+ is64Bit, // Indicates if process is 64-bit
+ pager, // Component handling the memory-mapped files
+ lastIndexingTime, // Last time of indexing an item
+ sizeOnDisk, // Database size on disk
+ tempBuffersSizeOnDisk, // Temp buffers size on disk
+ numberOfTransactionMergerQueueOperations
+\}
+`}
+
+
+
+
+
+## Get detailed database statistics
+
+To get **detailed database statistics**, use `GetDetailedStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetDetailedStatisticsOperation\` to the store
+const stats = await store.maintenance.send(new GetDetailedStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DetailedDatabaseStatistics` object.
+
+
+{`// Resulting object contains all database stats props from above and the following in addition:
+\{
+ // Total # of identities in database
+ countOfIdentities,
+ // Total # of compare-exchange items in database
+ countOfCompareExchange,
+ // Total # of cmpXchg tombstones in database
+ countOfCompareExchangeTombstones,
+ // Total # of TS deleted ranges values in database
+ countOfTimeSeriesDeletedRanges
+\}
+`}
+
+
+
+
+
+## Get statistics for another database
+
+* By default, you get statistics for the database defined in your Document Store.
+* Use `forDatabase` to get database and collection statistics for another database.
+* `forDatabase` can be used with **any** of the above statistics options.
+
+
+
+{`// Get stats for 'AnotherDatabase':
+const stats =
+ await store.maintenance.forDatabase("AnotherDatabase").send(new GetStatisticsOperation());
+`}
+
+
+
+* Learn more about switching operations to another database [here](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-php.mdx
new file mode 100644
index 0000000000..ac19ad0831
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-php.mdx
@@ -0,0 +1,211 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Statistics can be retrieved for the database and for collections.
+
+* By default, statistics are retrieved for the database defined in the Document Store.
+  To get database and collection statistics for another database use [forDatabase](../../../client-api/operations/maintenance/get-stats.mdx#get-statistics-for-another-database).
+
+* In this page:
+ * [Get collection statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-collection-statistics)
+ * [Get detailed collection statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-statistics)
+ * [Get database statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-database-statistics)
+ * [Get detailed database statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-statistics)
+ * [Get statistics for another database](../../../client-api/operations/maintenance/get-stats.mdx#get-statistics-for-another-database)
+
+## Get collection statistics
+
+To get **collection statistics**, use `GetCollectionStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetCollectionStatisticsOperation\` to the store
+/** @var CollectionStatistics $stats */
+$stats = $store->maintenance()->send(new GetCollectionStatisticsOperation());
+`}
+
+
+Statistics are returned in the `CollectionStatistics` object.
+
+
+{`// Collection stats results:
+class CollectionStatistics
+\{
+ // Total # of documents in all collections
+ private ?int $countOfDocuments = null;
+ // Total # of conflicts
+ private ?int $countOfConflicts = null;
+ // Total # of documents per collection
+ private array $collections = [];
+
+ // ... getters and setters
+\}
+`}
+
+
+
+
+
+## Get detailed collection statistics
+
+To get **detailed collection statistics**, use `GetDetailedCollectionStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetDetailedCollectionStatisticsOperation\` to the store
+/** @var DetailedCollectionStatistics $stats */
+$stats = $store->maintenance()->send(new GetDetailedCollectionStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DetailedCollectionStatistics` object.
+
+
+{`// Detailed collection stats results:
+class DetailedCollectionStatistics
+\{
+    // Total # of documents in all collections
+    private ?int $countOfDocuments = null;
+    // Total # of conflicts
+    private ?int $countOfConflicts = null;
+    // Collection details per collection
+    private array $collections = [];
+
+    // ... getters and setters
+\}
+
+// Details per collection
+class CollectionDetails
+\{
+ private ?string $name = null;
+ private ?int $countOfDocuments = null;
+ private ?Size $size = null;
+ private ?Size $documentsSize = null;
+ private ?Size $tombstonesSize = null;
+ private ?Size $revisionsSize = null;
+
+ // ... getters and setters
+\}
+`}
+
+
+
+
+
+## Get database statistics
+
+To get **database statistics**, use `GetStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetStatisticsOperation\` to the store
+/** @var DatabaseStatistics $stats */
+$stats = $store->maintenance()->send(new GetStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DatabaseStatistics` object.
+
+
+{`// Database stats results:
+class DatabaseStatistics implements ResultInterface
+\{
+ private ?int $lastDocEtag = null; // Last document etag in database
+ private ?int $lastDatabaseEtag = null; // Last database etag
+
+ private ?int $countOfIndexes = null; // Total # of indexes in database
+ private ?int $countOfDocuments = null; // Total # of documents in database
+ private ?int $countOfRevisionDocuments = null; // Total # of revision documents in database
+ private ?int $countOfDocumentsConflicts = null; // Total # of documents conflicts in database
+ private ?int $countOfTombstones = null; // Total # of tombstones in database
+ private ?int $countOfConflicts = null; // Total # of conflicts in database
+ private ?int $countOfAttachments = null; // Total # of attachments in database
+ private ?int $countOfUniqueAttachments = null; // Total # of unique attachments in database
+ private ?int $countOfCounterEntries = null; // Total # of counter-group entries in database
+ private ?int $countOfTimeSeriesSegments = null; // Total # of time-series segments in database
+
+ // List of stale index names in database
+ public function getStaleIndexes(): IndexInformationArray
+ \{
+ return IndexInformationArray::fromArray(
+ array_map(
+ function (IndexInformation $index) \{
+ return $index->isStale();
+ \},
+ $this->indexes->getArrayCopy())
+ );
+ \}
+
+ // Statistics for each index in database
+ private ?IndexInformationArray $indexes = null;
+
+ private ?string $databaseChangeVector = null; // Global change vector of the database
+ private ?string $databaseId = null; // Database identifier
+ private bool $is64Bit = false; // Indicates if process is 64-bit
+ private ?string $pager = null; // Component handling the memory-mapped files
+ private ?DateTimeInterface $lastIndexingTime = null; // Last time of indexing an item
+ private ?Size $sizeOnDisk = null; // Database size on disk
+ private ?Size $tempBuffersSizeOnDisk = null; // Temp buffers size on disk
+ private ?int $numberOfTransactionMergerQueueOperations = null;
+
+ // ... getters and setters
+\}
+`}
+
+
+
+
+
+## Get detailed database statistics
+
+To get **detailed database statistics**, use `GetDetailedStatisticsOperation`:
+
+
+{`// Pass an instance of class \`GetDetailedStatisticsOperation\` to the store
+/** @var DetailedDatabaseStatistics $stats */
+$stats = $store->maintenance()->send(new GetDetailedStatisticsOperation());
+`}
+
+
+Statistics are returned in the `DetailedDatabaseStatistics` object.
+
+
+{`// Detailed database stats results:
+class DetailedDatabaseStatistics extends DatabaseStatistics implements ResultInterface
+\{
+ // Total # of identities in database
+ private ?int $countOfIdentities = null;
+ // Total # of compare-exchange items in database
+ private ?int $countOfCompareExchange = null;
+ // Total # of cmpXchg tombstones in database
+ private ?int $countOfCompareExchangeTombstones = null;
+ // Total # of TS deleted ranges values in database
+ private ?int $countOfTimeSeriesDeletedRanges = null;
+
+ // ... getters and setters
+\}
+`}
+
+
+
+
+
+## Get statistics for another database
+
+* By default, you get statistics for the database defined in your Document Store.
+* Use `forDatabase` to get database and collection statistics for another database.
+* `forDatabase` can be used with **any** of the above statistics options.
+
+
+
+{`// Get stats for 'AnotherDatabase':
+/** @var DatabaseStatistics $stats */
+$stats = $store->maintenance()->forDatabase("AnotherDatabase")->send(new GetStatisticsOperation());
+`}
+
+
+
+* Learn more about switching operations to another database [here](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-python.mdx
new file mode 100644
index 0000000000..38b373cdd3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/_get-stats-python.mdx
@@ -0,0 +1,195 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Statistics can be retrieved for the database and for collections.
+
+* By default, statistics are retrieved for the database defined in the Document Store.
+  To get database and collection statistics for another database use [for_database](../../../client-api/operations/maintenance/get-stats.mdx#get-statistics-for-another-database).
+
+* In this page:
+ * [Get collection statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-collection-statistics)
+ * [Get detailed collection statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-collection-statistics)
+ * [Get database statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-database-statistics)
+ * [Get detailed database statistics](../../../client-api/operations/maintenance/get-stats.mdx#get-detailed-database-statistics)
+ * [Get statistics for another database](../../../client-api/operations/maintenance/get-stats.mdx#get-statistics-for-another-database)
+
+## Get collection statistics
+
+To get **collection statistics**, use `GetCollectionStatisticsOperation`:
+
+
+{`# Pass an instance of class 'GetCollectionStatisticsOperation' to the store
+stats = store.maintenance.send(GetCollectionStatisticsOperation())
+`}
+
+
+Statistics are returned in the `CollectionStatistics` object.
+
+
+{`class CollectionStatistics:
+ def __init__(
+ self,
+ count_of_documents: Optional[int] = None,
+ count_of_conflicts: Optional[int] = None,
+ collections: Optional[Dict[str, int]] = None,
+ ): ...
+`}
+
+
+
+
+
+## Get detailed collection statistics
+
+To get **detailed collection statistics**, use `GetDetailedCollectionStatisticsOperation`:
+
+
+{`# Pass an instance of class 'GetDetailedCollectionStatisticsOperation' to the store
+stats = store.maintenance.send(GetDetailedCollectionStatisticsOperation())
+`}
+
+
+Statistics are returned in the `DetailedCollectionStatistics` object.
+
+
+{`class Size:
+ def __init__(self, size_in_bytes: int = None, human_size: str = None): ...
+
+class CollectionDetails:
+ def __init__(
+ self,
+ name: str = None,
+ count_of_documents: int = None,
+ size: Size = None,
+ documents_size: Size = None,
+ tombstones_size: Size = None,
+ revisions_size: Size = None,
+ ): ...
+
+class DetailedCollectionStatistics:
+ def __init__(
+ self,
+ count_of_documents: int = None,
+ count_of_conflicts: int = None,
+ collections: Dict[str, CollectionDetails] = None,
+ ) -> None: ...
+`}
+
+
+
+
+
+## Get database statistics
+
+To get **database statistics**, use `GetStatisticsOperation`:
+
+
+{`# Pass an instance of class 'GetStatisticsOperation' to the store
+stats = store.maintenance.send(GetStatisticsOperation())
+`}
+
+
+Statistics are returned in the `DatabaseStatistics` object.
+
+
+{`class DatabaseStatistics:
+ def __init__(
+ self,
+ last_doc_etag: int = None,
+ last_database_etag: int = None,
+ count_of_indexes: int = None,
+ count_of_documents: int = None,
+ count_of_revision_documents: int = None,
+ count_of_documents_conflicts: int = None,
+ count_of_tombstones: int = None,
+ count_of_conflicts: int = None,
+ count_of_attachments: int = None,
+ count_of_unique_attachments: int = None,
+ count_of_counter_entries: int = None,
+ count_of_time_series_segments: int = None,
+ indexes: List[IndexInformation] = None,
+ database_change_vector: str = None,
+ database_id: str = None,
+ is_64_bit: bool = None,
+ pager: str = None,
+ last_indexing_time: datetime.datetime = None,
+ size_on_disk: Size = None,
+ temp_buffers_size_on_disk: Size = None,
+ number_of_transaction_merger_queue_operations: int = None,
+ ): ...
+`}
+
+
+
+
+
+## Get detailed database statistics
+
+To get **detailed database statistics**, use `GetDetailedStatisticsOperation`:
+
+
+{`# Pass an instance of class 'GetDetailedStatisticsOperation' to the store
+stats = store.maintenance.send(GetDetailedStatisticsOperation())
+`}
+
+
+Statistics are returned in the `DetailedDatabaseStatistics` object.
+
+
+{`class DetailedDatabaseStatistics(DatabaseStatistics):
+ def __init__(
+ self,
+ last_doc_etag: int = None,
+ last_database_etag: int = None,
+ count_of_indexes: int = None,
+ count_of_documents: int = None,
+ count_of_revision_documents: int = None,
+ count_of_documents_conflicts: int = None,
+ count_of_tombstones: int = None,
+ count_of_conflicts: int = None,
+ count_of_attachments: int = None,
+ count_of_unique_attachments: int = None,
+ count_of_counter_entries: int = None,
+ count_of_time_series_segments: int = None,
+ indexes: List[IndexInformation] = None,
+ database_change_vector: str = None,
+ database_id: str = None,
+ is_64_bit: bool = None,
+ pager: str = None,
+ last_indexing_time: datetime.datetime = None,
+ size_on_disk: Size = None,
+ temp_buffers_size_on_disk: Size = None,
+ number_of_transaction_merger_queue_operations: int = None,
+ count_of_identities: int = None, # Total # of identities in database
+ count_of_compare_exchange: int = None, # Total # of compare-exchange items in database
+ count_of_compare_exchange_tombstones: int = None, # Total # of cmpXchg tombstones in database
+ ): ...
+`}
+
+
+
+
+
+## Get statistics for another database
+
+* By default, you get statistics for the database defined in your Document Store.
+* Use `for_database` to get database and collection statistics for another database.
+* `for_database` can be used with **any** of the above statistics options.
+
+
+
+{`# Get stats for 'AnotherDatabase'
+stats = store.maintenance.for_database("AnotherDatabase").send(GetStatisticsOperation())
+`}
+
+
+
+* Learn more about switching operations to another database [here](../../../client-api/operations/how-to/switch-operations-to-a-different-database.mdx).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/assets/clean-change-vector-after.png b/versioned_docs/version-7.1/client-api/operations/maintenance/assets/clean-change-vector-after.png
new file mode 100644
index 0000000000..5ed0688eb5
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/operations/maintenance/assets/clean-change-vector-after.png differ
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/assets/clean-change-vector.png b/versioned_docs/version-7.1/client-api/operations/maintenance/assets/clean-change-vector.png
new file mode 100644
index 0000000000..9962658885
Binary files /dev/null and b/versioned_docs/version-7.1/client-api/operations/maintenance/assets/clean-change-vector.png differ
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/backup/_category_.json b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/_category_.json
new file mode 100644
index 0000000000..578bdcc591
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 6,
+  "label": "Backup"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/backup/backup-overview.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/backup-overview.mdx
new file mode 100644
index 0000000000..d0252ef9ca
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/backup-overview.mdx
@@ -0,0 +1,598 @@
+---
+title: "Backup"
+hide_table_of_contents: true
+sidebar_label: Backup
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Backup
+
+
+* Create a backup of your data to secure it or to preserve a copy of it in its current state for future reference.
+
+* RavenDB's Backup task is an [Ongoing-Task](../../../../studio/database/tasks/ongoing-tasks/general-info.mdx)
+ designed to run periodically on a pre-defined schedule.
+ You can run it as a one-time operation as well, by using [Export](../../../../client-api/smuggler/what-is-smuggler.mdx#export)
+ or executing a backup task [immediately](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#initiate-immediate-backup-execution).
+
+* On a [sharded](../../../../sharding/overview.mdx) database, a single backup task
+ is defined by the user for all shards, and RavenDB automatically defines
+ sub-tasks that create backups per shard.
+ Read about backups on a sharded database [in the section dedicated to it](../../../../sharding/backup-and-restore/backup.mdx).
+
+* In this page:
+ * [Backup Types](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup-types)
+ * [Logical-Backup](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#logical-backup)
+ * [Snapshot](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#snapshot)
+ * [Backup Scope](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup-scope)
+ * [Full Backup](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#full-backup)
+ * [Incremental Backup](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#incremental-backup)
+ * [Backup to Local and Remote Destinations](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup-to-local-and-remote-destinations)
+ * [Backup Retention Policy](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup-retention-policy)
+ * [Server-Wide Backup](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#server-wide-backup)
+ * [Initiate Immediate Backup Execution](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#initiate-immediate-backup-execution)
+ * [Delay Backup Execution](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#delay-backup-execution)
+ * [Recommended Precautions](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#recommended-precautions)
+
+
+## Backup Types
+
+#### Logical-Backup
+
+* Data, index definitions, and ongoing tasks are backed up in [compressed](../../../../server/ongoing-tasks/backup-overview.mdx#compression)
+ JSON files.
+
+* During the restoration, RavenDB -
+ * Re-inserts all data into the database.
+ * Inserts the saved index definitions. To save space, Logical Backup stores index definitions only.
+ After restoration, the dataset is scanned and indexed according to the definitions.
+
+* See [backup contents](../../../../server/ongoing-tasks/backup-overview.mdx#backup-contents).
+
+* Restoration time is, therefore, **slower** than when restoring from a Snapshot.
+
+* The backup file size is **significantly smaller** than that of a Snapshot.
+
+* In addition to full data backup, Logical Backups can be defined as **incremental**,
+ saving any changes made since the previous backup.
+
+* The following code sample defines a full-backup task that would be executed every 3 hours:
+
+
+{`var config = new PeriodicBackupConfiguration
+\{
+ LocalSettings = new LocalSettings
+ \{
+ // Local path for storing the backup
+ FolderPath = @"E:\\RavenBackups"
+ \},
+
+ // Full Backup period (Cron expression for a 3-hours period)
+ FullBackupFrequency = "0 */3 * * *",
+
+ // Set backup type to Logical-Backup
+ BackupType = BackupType.Backup,
+
+ // Task Name
+ Name = "fullBackupTask",
+\};
+var operation = new UpdatePeriodicBackupOperation(config);
+var result = await docStore.Maintenance.SendAsync(operation);
+`}
+
+
+ Note the usage of [Cron scheduling](https://en.wikipedia.org/wiki/Cron) when setting backup frequency.
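+
+For illustration, here are the Cron expressions used in the examples on this page and the
+schedules they encode (shown as isolated property assignments, not a complete configuration):
+
+
+{`// Illustrative Cron expressions for scheduling backups:
+FullBackupFrequency = "0 */3 * * *",         // every 3 hours, on the hour
+FullBackupFrequency = "0 */6 * * *",         // every 6 hours, on the hour
+FullBackupFrequency = "0 2 * * 0",           // every Sunday at 02:00
+IncrementalBackupFrequency = "*/20 * * * *", // every 20 minutes
+`}
+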
+#### Snapshot
+
+* A Snapshot is a compressed binary duplication of the full database structure.
+ This includes the data file and the journals at a given point in time.
+ Therefore it includes fully built indexes and ongoing tasks.
+ See [file structure](../../../../server/storage/directory-structure.mdx#storage--directory-structure) for more info.
+
+* Snapshot backups are available only for **Enterprise subscribers**.
+
+* During restoration -
+ * Re-inserting data into the database is not required.
+ * Re-indexing is not required.
+
+* See [backup contents](../../../../server/ongoing-tasks/backup-overview.mdx#backup-contents).
+
+* Restoration is typically **faster** than that of a logical backup.
+
+* Snapshot size is typically **larger** than that of a logical backup.
+
+* If Incremental backups are created for a Snapshot-type backup:
+ * The first backup will be a full Snapshot.
+ * The following backups will be Incremental.
+ * [Incremental backups](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#incremental-backup)
+ have different storage contents than Snapshots.
+
+* Code Sample:
+
+
+{`// Set backup type to Snapshot
+BackupType = BackupType.Snapshot,
+`}
+
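+
+A complete task definition for a Snapshot differs from the logical-backup task shown earlier
+only in its `BackupType`; a sketch (the task name is illustrative):
+
+
+{`var config = new PeriodicBackupConfiguration
+\{
+    LocalSettings = new LocalSettings
+    \{
+        // Local path for storing the backup
+        FolderPath = @"E:\\RavenBackups"
+    \},
+
+    // Full Backup period (Cron expression for a 3-hours period)
+    FullBackupFrequency = "0 */3 * * *",
+
+    // Set backup type to Snapshot
+    BackupType = BackupType.Snapshot,
+
+    // Task Name (illustrative)
+    Name = "snapshotBackupTask",
+\};
+var operation = new UpdatePeriodicBackupOperation(config);
+var result = await docStore.Maintenance.SendAsync(operation);
+`}
+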
+
+#### Basic Comparison Between a Logical-Backup and a Snapshot
+
+ | Backup Type | Stored Format | Restoration speed | Size |
+ | ------ | ------ | --- | --- |
+ | Snapshot | Compressed Binary Image | Fast | Larger than a logical-backup |
+ | Logical backup | Compressed Textual Data - JSON | Slow | Smaller than a Snapshot |
+
+
+Verify that RavenDB is allowed to store files in the path set in `LocalSettings.FolderPath`.
+
+
+
+
+
+
+## Backup Scope
+
+As described in [the overview](../../../../server/ongoing-tasks/backup-overview.mdx#backing-up-and-restoring-a-database), a backup task can create **full** and **incremental** backups.
+
+* A Backup Task can be defined to create either a full data backup or an incremental backup.
+ In both cases, the backup task adds a single new backup file to the backup folder each time it runs,
+ leaving the existing backup files untouched.
+#### Full-Backup
+
+
+* **File Format**
+ A full-backup is a **compressed JSON file** if it is a logical
+ backup, or a **compressed binary file** if it is a snapshot.
+
+* **Task Ownership**
+ There are no preliminary conditions for creating a full-backup.
+ Any node can perform this task.
+
+* **To run a full-backup**
+ Set `FullBackupFrequency`.
+
+
+{`// A full-backup will run every 6-hours (Cron expression)
+FullBackupFrequency = "0 */6 * * *",
+`}
+
+
+#### Incremental-Backup
+
+* **File Format and Notes About Contents**
+ * An incremental-backup file is **always in JSON format**.
+    This is so even when the full-backup it is associated with is a binary snapshot.
+ * An incremental backup stores index definitions (not full indexes).
+ After the backup is restored, the dataset is re-indexed according to the index definitions.
+
+ This initial re-indexing can be time-consuming on large datasets.
+
+ * An incremental backup doesn't store [change vectors](../../../../server/clustering/replication/change-vector.mdx).
+
+
+* **Task Ownership**
+ The ownership of an incremental-backup task is granted dynamically by the cluster.
+ An incremental-backup can be executed only by the same node that currently owns the backup task.
+  A node can run an incremental-backup only after it has run a full-backup at least once.
+
+* **To run an incremental-backup**
+ Set `IncrementalBackupFrequency`.
+
+
+
+{`// An incremental-backup will run every 20 minutes (Cron expression)
+IncrementalBackupFrequency = "*/20 * * * *",
+`}
+
+
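+
+A single backup task can define both frequencies. A minimal sketch that combines the two
+settings above, reusing the local-folder and task-definition pattern shown earlier (the task
+name is illustrative):
+
+
+{`var config = new PeriodicBackupConfiguration
+\{
+    LocalSettings = new LocalSettings
+    \{
+        FolderPath = @"E:\\RavenBackups"
+    \},
+
+    BackupType = BackupType.Backup,
+
+    // Full-backup every 6 hours, incremental-backup every 20 minutes
+    FullBackupFrequency = "0 */6 * * *",
+    IncrementalBackupFrequency = "*/20 * * * *",
+
+    Name = "fullAndIncrementalBackupTask",
+\};
+var operation = new UpdatePeriodicBackupOperation(config);
+var result = await docStore.Maintenance.SendAsync(operation);
+`}
+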
+
+
+
+## Backup to Local and Remote Destinations
+
+* Backups can be made **locally**, as well as to a set of **remote locations** including -
+ * A network path
+ * An FTP/SFTP target
+ * Azure Storage
+ * Amazon S3
+ * Amazon Glacier
+ * Google Cloud
+
+* RavenDB will store data in a local folder first, and transfer it to the remote
+ destination from the local one.
+ * If a local folder hasn't been specified, RavenDB will use the
+ temp folder defined in its [Storage.TempPath](../../../../server/configuration/storage-configuration.mdx#storagetemppath) setting.
+ If _Storage.TempPath_ is not defined, the temporary files
+ will be created at the same location as the data file.
+ In either case, the folder will be used as temporary storage
+ and the local files deleted from it when the transfer is completed.
+ * If a local folder **has** been specified, RavenDB will use it both
+ for the transfer and as its permanent local backup location.
+
+* Local and Remote Destinations Settings Code Sample:
+
+
+{`var config = new PeriodicBackupConfiguration
+\{
+ LocalSettings = new LocalSettings
+ \{
+ FolderPath = @"E:\\RavenBackups"
+ \},
+
+ // FTP Backup settings
+ FtpSettings = new FtpSettings
+ \{
+ Url = "192.168.10.4:8080",
+ UserName = "John",
+ Password = "JohnDoe38"
+ \},
+
+ // Azure Backup settings
+ AzureSettings = new AzureSettings
+ \{
+ StorageContainer = "storageContainer",
+ RemoteFolderName = "remoteFolder",
+ AccountName = "JohnAccount",
+ AccountKey = "key"
+ \},
+
+ // Amazon S3 bucket settings.
+ S3Settings = new S3Settings
+ \{
+ AwsAccessKey = "your access key here",
+ AwsSecretKey = "your secret key here",
+ AwsRegionName = "OPTIONAL",
+ BucketName = "john-bucket"
+ \},
+
+ // Amazon Glacier settings.
+ GlacierSettings = new GlacierSettings
+ \{
+ AwsAccessKey = "your access key here",
+ AwsSecretKey = "your secret key here",
+ AwsRegionName = "OPTIONAL",
+ VaultName = "john-glacier",
+ RemoteFolderName = "john/backups"
+ \},
+
+ // Google Cloud Backup settings
+ GoogleCloudSettings = new GoogleCloudSettings
+ \{
+ BucketName = "RavenBucket",
+ RemoteFolderName = "BackupFolder",
+ GoogleCredentialsJson = "GoogleCredentialsJson"
+ \}
+
+\};
+var operation = new UpdatePeriodicBackupOperation(config);
+var result = await docStore.Maintenance.SendAsync(operation);
+`}
+
+
+
+
+ Use AWS [IAM](https://aws.amazon.com/iam/) (Identity and Access Management)
+ to restrict users' access while they create backups.
+ E.g. -
+
+
+{`\{
+ "Version": "2012-10-17",
+ "Statement": [
+ \{
+ "Sid": "VisualEditor0",
+ "Effect": "Allow",
+ "Action": "s3:PutObject",
+ "Resource": "arn:aws:s3:::BUCKET_NAME/*"
+ \},
+ \{
+ "Sid": "VisualEditor1",
+ "Effect": "Allow",
+ "Action": [
+ "s3:ListBucket",
+ "s3:GetBucketAcl",
+ "s3:GetBucketLocation"
+ ],
+ "Resource": "arn:aws:s3:::BUCKET_NAME"
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+## Backup Retention Policy
+
+By default, backups are stored indefinitely. The backup retention policy sets
+a retention period, at the end of which backups are deleted. Deletion occurs
+during the next scheduled backup task after the end of the retention period.
+
+Full backups and their corresponding incremental backups are deleted together.
+Before a full backup can be deleted, all of its incremental backups must be older
+than the retention period as well.
+
+The retention policy is a property of `PeriodicBackupConfiguration`:
+
+
+
+{`public class RetentionPolicy
+\{
+ public bool Disabled \{ get; set; \}
+ public TimeSpan? MinimumBackupAgeToKeep \{ get; set; \}
+\}
+`}
+
+
+
+| Parameter | Type | Description |
+| - | - | - |
+| **Disabled** | `bool` | If set to `true`, backups will be retained indefinitely, and not deleted. Default: false |
+| **MinimumBackupAgeToKeep** | `TimeSpan` | The minimum amount of time to retain a backup. Once a backup is older than this time span, it will be deleted during the next scheduled backup task. |
+
+#### Example
+
+
+
+{`var config = new PeriodicBackupConfiguration
+\{
+ RetentionPolicy = new RetentionPolicy
+ \{
+ Disabled = false, // False is the default value
+ MinimumBackupAgeToKeep = TimeSpan.FromDays(100)
+ \}
+\};
+`}
+
+
+
+
+
+## Server-Wide Backup
+
+You can create a Server-Wide Backup task to back up **all the databases in your cluster** at a scheduled time.
+Individual databases can be excluded from the backup. Learn more in [Studio: Server-Wide Backup](../../../../studio/server/server-wide-backup.mdx).
+
+Backups can be made locally, as well as to a [set of remote locations](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup-to-local-and-remote-destinations).
+
+#### Examples
+
+A server-wide backup configuration that sets multiple destinations:
+
+
+
+{`var putConfiguration = new ServerWideBackupConfiguration
+\{
+ Disabled = true,
+ FullBackupFrequency = "0 2 * * 0",
+ IncrementalBackupFrequency = "0 2 * * 1",
+
+ //Backups are stored in this folder first, and sent from it to remote destinations (if defined).
+ LocalSettings = new LocalSettings
+ \{
+ FolderPath = "localFolderPath"
+ \},
+
+ //FTP settings
+ FtpSettings = new FtpSettings
+ \{
+ Url = "ftps://localhost/john/backups"
+ \},
+
+ //Microsoft Azure settings.
+ AzureSettings = new AzureSettings
+ \{
+ AccountKey = "Azure Account Key",
+ AccountName = "Azure Account Name",
+ RemoteFolderName = "john/backups"
+ \},
+
+ //Amazon S3 bucket settings.
+ S3Settings = new S3Settings
+ \{
+ AwsAccessKey = "Amazon S3 Access Key",
+ AwsSecretKey = "Amazon S3 Secret Key",
+ AwsRegionName = "Amazon S3 Region Name",
+ BucketName = "john-bucket",
+ RemoteFolderName = "john/backups"
+ \},
+
+ //Amazon Glacier settings.
+ GlacierSettings = new GlacierSettings
+ \{
+ AwsAccessKey = "Amazon Glacier Access Key",
+ AwsSecretKey = "Amazon Glacier Secret Key",
+ AwsRegionName = "Amazon Glacier Region Name",
+ VaultName = "john-glacier",
+ RemoteFolderName = "john/backups"
+ \},
+
+ //Google Cloud Backup settings
+ GoogleCloudSettings = new GoogleCloudSettings
+ \{
+ BucketName = "Google Cloud Bucket",
+ RemoteFolderName = "BackupFolder",
+ GoogleCredentialsJson = "GoogleCredentialsJson"
+ \}
+\};
+
+var result = await store.Maintenance.Server.SendAsync(new PutServerWideBackupConfigurationOperation(putConfiguration));
+var serverWideConfiguration = await store.Maintenance.Server.SendAsync(new GetServerWideBackupConfigurationOperation(result.Name));
+`}
+
+
+
+A server-wide backup configuration that excludes several databases:
+
+
+
+{`var DBExcludeConfiguration = new ServerWideBackupConfiguration
+\{
+ Disabled = true,
+ FullBackupFrequency = "0 2 * * 0",
+ LocalSettings = new LocalSettings
+ \{
+ FolderPath = "localFolderPath"
+ \},
+ ExcludedDatabases = new []
+ \{
+ "DB1",
+ "DB2",
+ "DB5",
+ "NorthWind",
+ "DB2_Jun_2018_Backup"
+ \}
+\};
+
+var result = await store.Maintenance.Server.SendAsync(new PutServerWideBackupConfigurationOperation(DBExcludeConfiguration));
+`}
+
+
+
+
+
+## Initiate Immediate Backup Execution
+
+The Backup task is [executed periodically](../../../../server/ongoing-tasks/backup-overview.mdx#backup--restore-overview) on its predefined schedule.
+If needed, it can also be executed immediately.
+
+* To execute an existing backup task immediately, use the `StartBackupOperation` method.
+
+
+{`// Create a new backup task
+var operation = new UpdatePeriodicBackupOperation(config);
+var result = await docStore.Maintenance.SendAsync(operation);
+
+// Run the backup task immediately
+await docStore.Maintenance.SendAsync(new StartBackupOperation(true, result.TaskId));
+`}
+
+
+
+ * Definition:
+
+
+{`public StartBackupOperation(bool isFullBackup, long taskId)
+`}
+
+
+
+ * Parameters:
+
+ | Parameter | Type | Functionality |
+ | ------ | ------ | ------ |
+  | isFullBackup | bool | `true`: full-backup, `false`: incremental-backup |
+ | taskId | long | The existing backup task ID |
+
+
+* To verify the execution results, use the `GetPeriodicBackupStatusOperation` method.
+
+
+{`// Pass the ongoing backup task ID to GetPeriodicBackupStatusOperation and send it
+PeriodicBackupStatus backupStatus = (await docStore.Maintenance.SendAsync(
+    new GetPeriodicBackupStatusOperation(result.TaskId))).Status;
+`}
+
+
+ * Return Value:
+ The **PeriodicBackupStatus** object returned from **GetPeriodicBackupStatusOperation** is filled with the previously configured backup parameters and with the execution results.
+
+
+{`public class PeriodicBackupStatus : IDatabaseTaskStatus
+\{
+ public long TaskId \{ get; set; \}
+ public BackupType BackupType \{ get; set; \}
+ public bool IsFull \{ get; set; \}
+ public string NodeTag \{ get; set; \}
+ public DateTime? LastFullBackup \{ get; set; \}
+ public DateTime? LastIncrementalBackup \{ get; set; \}
+ public DateTime? LastFullBackupInternal \{ get; set; \}
+ public DateTime? LastIncrementalBackupInternal \{ get; set; \}
+ public LocalBackup LocalBackup \{ get; set; \}
+ public UploadToS3 UploadToS3;
+ public UploadToGlacier UploadToGlacier;
+ public UploadToAzure UploadToAzure;
+ public UploadToFtp UploadToFtp;
+ public long? LastEtag \{ get; set; \}
+ public LastRaftIndex LastRaftIndex \{ get; set; \}
+ public string FolderName \{ get; set; \}
+ public long? DurationInMs \{ get; set; \}
+ public long Version \{ get; set; \}
+ public Error Error \{ get; set; \}
+ public long? LastOperationId \{ get; set; \}
+\}
+`}
+
+
+
+
+## Delay Backup Execution
+
+The execution of a periodic backup task can be **delayed** for a given time period
+via [Studio](../../../../studio/database/tasks/backup-task.mdx#delaying-a-running-backup-task)
+or using the `DelayBackupOperation` store operation.
+
+* Definition:
+
+
+{`public DelayBackupOperation(long runningBackupTaskId, TimeSpan duration)
+`}
+
+
+
+* Parameters:
+
+ | Parameter | Type | Functionality |
+ | ------ | ------ | ------ |
+ | runningBackupTaskId| `long` | Backup task ID |
+ | duration | `TimeSpan` | Delay Duration |
+
+* Example:
+  To delay the execution of a running backup task, pass `DelayBackupOperation`
+ the task's ID and the delay duration.
+
+
+{`// Get backup operation info
+var taskBackupInfo = await docStore.Maintenance.SendAsync(
+ new GetOngoingTaskInfoOperation(taskId, OngoingTaskType.Backup)) as OngoingTaskBackup;
+
+// Set delay duration to 10 minutes from now
+var delayDuration = TimeSpan.FromMinutes(10);
+var delayUntil = DateTime.Now + delayDuration;
+
+// Delay backup operation
+await docStore.Maintenance.SendAsync(
+ new DelayBackupOperation(taskBackupInfo.OnGoingBackup.RunningBackupTaskId, delayDuration));
+`}
+
+
+
+
+
+## Recommended Precautions
+
+
+* **Don't substitute RavenDB's backup procedures with simply copying the database folder yourself**.
+ The official backup procedure satisfies needs that simply copying the database folder does not. E.g. -
+  * A reliable point-in-time freeze of the backed-up data.
+  * ACID consistency of the backed-up data, preserving its independence during restoration.
+
+* **Remove old backup files regularly**.
+ Set the [backup retention policy](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup-retention-policy)
+ to remove unneeded backup files so that they don't build up.
+  While setting how many days to keep your backups, consider how much recent database history you need access to.
+
+* **Store backup files in a location other than your database's**.
+ Note that backup files are always stored in a local folder first (even when the final backup destination is remote).
+  Make sure this local folder is not the one your database is stored in, so that backup files do not consume the database's storage space.
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/backup/encrypted-backup.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/encrypted-backup.mdx
new file mode 100644
index 0000000000..46d3955f03
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/encrypted-backup.mdx
@@ -0,0 +1,336 @@
+---
+title: "Backup Encryption"
+hide_table_of_contents: true
+sidebar_label: Encryption
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Backup Encryption
+
+
+* When a database is **encrypted**, you can generate the following backup types for it:
+ * An *encrypted Snapshot* (using the database encryption key)
+ * An *encrypted Logical-Backup* (using the database encryption key, or any key of your choice)
+ * An *un-encrypted Logical-Backup*
+
+* When a database is **not encrypted**, you can generate the following backup types for it:
+ * An *un-encrypted Snapshot*
+ * An *encrypted Logical-Backup* (providing an encryption key of your choice)
+ * An *un-encrypted* Logical-Backup
+
+* **Incremental backups** of encrypted logical-backups and snapshots are encrypted as well,
+ using the same encryption key provided for the full backup.
+
+* In this page:
+  * [RavenDB's Security Approach](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#ravendbs-security-approach)
+ * [Secure Client-Server Communication](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#secure-client-server-communication)
+ * [Database Encryption](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#database-encryption)
+ * [Backup-Encryption Overview](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#backup-encryption-overview)
+ * [Prerequisites to Encrypting Backups](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#prerequisites-to-encrypting-backups)
+ * [Choosing Encryption Mode & Key](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#choosing-encryption-mode--key)
+ * [Creating an Encrypted Logical-Backup](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#creating-an-encrypted-logical-backup)
+ * [For a Non-Encrypted Database](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#for-a-non-encrypted-database)
+ * [For an Encrypted Database](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#for-an-encrypted-database)
+ * [Creating an Encrypted Snapshot](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#creating-an-encrypted-snapshot)
+ * [Restoring an Encrypted Backup](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#restoring-an-encrypted-backup)
+ * [Restoring an encrypted Logical-Backup](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#restoring-an-encrypted-logical-backup)
+ * [Restoring a Snapshot](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#restoring-a-snapshot)
+
+## RavenDB's Security Approach
+
+RavenDB's comprehensive security approach includes -
+
+* **Authentication** and **Certification**
+ to secure your data while it is **transferred between client and server**.
+* **Database Encryption**
+ to secure your data while **stored in the database**.
+* **Backup-Files Encryption**
+ to secure your data while **stored for safe-keeping**.
+#### Secure Client-Server Communication
+
+To prevent unauthorized access to your data during transfer, apply the following:
+
+* **Enable secure communication** in advance, during the server setup.
+ Secure communication can be enabled either [manually](../../../../server/security/authentication/certificate-configuration.mdx)
+ or [using the setup-wizard](../../../../start/installation/setup-wizard.mdx).
+* **Authenticate with the server**.
+ Secure communication requires clients to **certify themselves** in order to access RavenDB.
+ Client authentication code sample:
+
+
+{`
+// path to the certificate you received during the server setup
+var cert = new X509Certificate2(@"C:\\Users\\RavenDB\\authentication_key\\admin.client.certificate.RavenDBdom.pfx");
+
+using (var docStore = new DocumentStore
+\{
+ Urls = new[] \{ "https://a.RavenDBdom.development.run" \},
+ Database = "encryptedDatabase",
+ Certificate = cert
+\}.Initialize())
+\{
+ // Backup & Restore procedures here
+\}
+`}
+
+
+#### Database Encryption
+
+Secure the data stored on the server by
+[encrypting your database](../../../../server/security/encryption/database-encryption.mdx).
+
+* **Secure communication to enable database encryption.**
+ An encrypted database can only be created when the
+ [client-server communication is secure](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#secure-client-server-communication).
+
+
+
+## Backup-Encryption Overview
+
+#### Prerequisites to Encrypting Backups
+
+* **Logical-Backup**
+ There are no prerequisites to encrypting a Logical-Backup.
+ An encrypted logical-backup can be generated for an **encrypted database** and
+ for a **non-encrypted database**.
+ The encryption key used to generate an encrypted logical-backup of an encrypted database
+ can be different from the original database encryption key.
+
+* **Snapshot**
+ A [snapshot](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#snapshot) is an exact image of your database.
+ If the database is **not encrypted**, its snapshot wouldn't be either.
+ If the database is **encrypted**, its snapshot would also be encrypted using the database encryption key.
+ If you want your snapshot to be encrypted, simply take the snapshot of an
+ [encrypted database](../../../../server/security/encryption/database-encryption.mdx#creating-an-encrypted-database-using-the-rest-api-and-the-client-api).
+
+#### Choosing Encryption Mode & Key
+
+Use the same [Backup](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup) and [Restore](../../../../client-api/operations/maintenance/backup/restore.mdx) methods that are used to create and restore **un**-encrypted backups.
+Specify whether encryption is used, and with which encryption key,
+in the **BackupEncryptionSettings** structure defined within the
+[PeriodicBackupConfiguration](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup-to-local-and-remote-destinations) object.
+
+`BackupEncryptionSettings` definition:
+
+
+
+{`public class BackupEncryptionSettings
+\{
+ public EncryptionMode EncryptionMode \{ get; set; \}
+ public string Key \{ get; set; \}
+
+ public BackupEncryptionSettings()
+ \{
+ Key = null;
+ EncryptionMode = EncryptionMode.None;
+ \}
+\}
+`}
+
+
+
+`BackupEncryptionSettings` properties:
+
+| Property | Type | Functionality |
+|--------------------|--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **EncryptionMode** | enum | Set the encryption mode. `None` - Use **no encryption** (default mode). `UseDatabaseKey` - Use **the same key the DB is encrypted with** (Logical-Backups & Snapshots). `UseProvidedKey` - Provide **your own encryption key** (Logical-Backups only). |
+| **Key** | string | Pass **your own encryption key** using this parameter (Logical-Backup only), e.g. `EncryptionMode = EncryptionMode.UseProvidedKey, Key = "OI7Vll7DroXdUORtc6Uo64wdAk1W0Db9ExXXgcg5IUs="`. **Note**: When a Key is provided but `EncryptionMode` is set to `UseDatabaseKey`, the **database key** is used (and not the provided key). |
+
+`EncryptionMode` definition:
+
+
+
+{`public enum EncryptionMode
+\{
+ None,
+ UseDatabaseKey,
+ UseProvidedKey
+\}
+`}
+
+
+
+## Creating an Encrypted Logical-Backup
+
+An encrypted logical-backup can be created for both **encrypted** and **non-encrypted** databases.
+#### For a Non-Encrypted Database
+
+1. To create a **non-encrypted logical-backup** -
+ **Set** `EncryptionMode = EncryptionMode.None`
+ Or
+ **Don't set** EncryptionMode & Key at all - Default value is: `EncryptionMode.None`
+
+2. To create an **encrypted logical-backup**, set:
+
+
+{`EncryptionMode = EncryptionMode.UseProvidedKey,
+Key = "a_key_of_your_choice"
+`}
+
+
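+A fuller sketch, mirroring the database-key example in the next subsection but using a provided key
+(the key value is a placeholder):
+
+
+{`var config = new PeriodicBackupConfiguration
+\{
+    // Set backup type to logical-backup
+    BackupType = BackupType.Backup,
+
+    BackupEncryptionSettings = new BackupEncryptionSettings
+    \{
+        // Encrypt the backup with a key of your choice
+        EncryptionMode = EncryptionMode.UseProvidedKey,
+        Key = "OI7Vll7DroXdUORtc6Uo64wdAk1W0Db9ExXXgcg5IUs="
+    \}
+\};
+var operation = new UpdatePeriodicBackupOperation(config);
+var result = await docStore.Maintenance.SendAsync(operation);
+`}
+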
+#### For an Encrypted Database
+
+1. To create a non-encrypted logical-backup -
+ Set `EncryptionMode = EncryptionMode.None`
+
+2. To create an encrypted logical-backup using the database key:
+ **Set** `EncryptionMode = EncryptionMode.UseDatabaseKey`
+ Or
+ **Don't set** EncryptionMode & Key at all - Default value is: `EncryptionMode.UseDatabaseKey`
+
+
+{`//Encrypting a logical-backup using the database encryption key
+var config = new PeriodicBackupConfiguration
+\{
+ //Additional settings here..
+ //..
+
+ //Set backup type to logical-backup
+ BackupType = BackupType.Backup,
+
+ BackupEncryptionSettings = new BackupEncryptionSettings
+ \{
+ //Use the same encryption key as the database
+ EncryptionMode = EncryptionMode.UseDatabaseKey
+ \}
+\};
+var operation = new UpdatePeriodicBackupOperation(config);
+var result = await docStore.Maintenance.SendAsync(operation);
+`}
+
+
+
+3. To create an encrypted logical-backup using your own key, set:
+
+
+{`EncryptionMode = EncryptionMode.UseProvidedKey,
+Key = "a_key_of_your_choice"
+`}
+
+
+
+
+
+## Creating an Encrypted Snapshot
+
+An encrypted Snapshot can only be created for an encrypted database.
+
+* To create a **Non-Encrypted Snapshot** (for a non-encrypted database) -
+ **Set** `EncryptionMode = EncryptionMode.None`
+ Or
+ **Don't set** EncryptionMode & Key at all - Default value is: `EncryptionMode.None`
+
+* To create an **Encrypted Snapshot** (for an encrypted database) -
+ **Set** `EncryptionMode = EncryptionMode.UseDatabaseKey`
+ Or
+ **Don't set** EncryptionMode & Key at all - Default value is: `EncryptionMode.UseDatabaseKey`
+
+
+{`var config = new PeriodicBackupConfiguration
+\{
+ //Additional settings here..
+ //..
+
+ //Set backup type to snapshot.
+ //If the database is encrypted, its snapshot will be encrypted as well.
+ BackupType = BackupType.Snapshot,
+
+ BackupEncryptionSettings = new BackupEncryptionSettings
+ \{
+ //To encrypt a snapshot, EncryptionMode must be set to EncryptionMode.UseDatabaseKey.
+ //Setting it to other values will generate an InvalidOperationException.
+ EncryptionMode = EncryptionMode.UseDatabaseKey
+ \}
+\};
+var operation = new UpdatePeriodicBackupOperation(config);
+var result = await docStore.Maintenance.SendAsync(operation);
+`}
+
+
+
+
+
+## Restoring an Encrypted Backup
+
+To [restore](../../../../client-api/operations/maintenance/backup/restore.mdx#configuration-and-execution)
+an encrypted backup you must provide the **key** that was used to encrypt it.
+Pass the key to `RestoreBackupOperation` via `restoreConfiguration.BackupEncryptionSettings`.
+
+
+{`// restore encrypted database
+
+var restoreConfiguration = new RestoreBackupConfiguration();
+
+//New database name
+restoreConfiguration.DatabaseName = "newEncryptedDatabase";
+
+//Backup-file location
+var backupPath = @"C:\\Users\\RavenDB\\2019-01-06-11-11.ravendb-encryptedDatabase-A-snapshot";
+restoreConfiguration.BackupLocation = backupPath;
+
+restoreConfiguration.BackupEncryptionSettings = new BackupEncryptionSettings
+\{
+ Key = "OI7Vll7DroXdUORtc6Uo64wdAk1W0Db9ExXXgcg5IUs="
+\};
+
+var restoreBackupTask = new RestoreBackupOperation(restoreConfiguration);
+docStore.Maintenance.Server.Send(restoreBackupTask);
+`}
+
+
+#### Restoring an encrypted Logical-Backup
+
+A database is [restored](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#restoring-an-encrypted-backup) from a logical-backup
+to its **unencrypted** form.
+To restore a database and encrypt its contents, you must request encryption explicitly.
+
+* **To encrypt the restored database**:
+  Pass `RestoreBackupOperation` an encryption key via `restoreConfiguration.EncryptionKey`.
+  Note: This key can be different from the key that was used to encrypt the logical-backup.
+
+
+{`//Restore the database using the key you encrypted it with
+restoreConfiguration.BackupEncryptionSettings = new BackupEncryptionSettings
+\{
+ Key = "OI7Vll7DroXdUORtc6Uo64wdAk1W0Db9ExXXgcg5IUs="
+\};
+
+//Encrypt the restored database using this key
+restoreConfiguration.EncryptionKey = "1F0K2R/KkcwbkK7n4kYlv5eqisy/pMnSuJvZ2sJ/EKo=";
+
+var restoreBackupTask = new RestoreBackupOperation(restoreConfiguration);
+docStore.Maintenance.Server.Send(restoreBackupTask);
+`}
+
+
+
+* To restore an **unencrypted** logical-backup:
+ Either provide **no encryption key** to activate the default value (`EncryptionMode.None`), or -
+ Set `EncryptionMode.None` Explicitly.
+
+
+{`restoreConfiguration.BackupEncryptionSettings = new BackupEncryptionSettings
+\{
+ //No encryption
+ EncryptionMode = EncryptionMode.None
+\};
+`}
+
+
+#### Restoring a Snapshot
+
+Restore a snapshot as specified in [Restoring an Encrypted Database](../../../../client-api/operations/maintenance/backup/encrypted-backup.mdx#restoring-an-encrypted-backup).
+
+* The database of an un-encrypted snapshot is restored to its un-encrypted form.
+* The database of an encrypted snapshot is restored to its encrypted form.
+ You must provide the database key that was used to encrypt the snapshot.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/backup/faq.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/faq.mdx
new file mode 100644
index 0000000000..22f0e61b61
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/faq.mdx
@@ -0,0 +1,113 @@
+---
+title: "Backup & Restore: Frequently Asked Questions"
+hide_table_of_contents: true
+sidebar_label: FAQ
+sidebar_position: 3
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Backup & Restore: Frequently Asked Questions
+
+
+* In this page:
+ * [Is there a one-time backup?](../../../../client-api/operations/maintenance/backup/faq.mdx#is-there-a-one-time-backup)
+ * [How do I create a backup of my cluster, not just one database?](../../../../client-api/operations/maintenance/backup/faq.mdx#how-do-i-create-a-backup-of-my-cluster-not-just-one-database)
+ * [How should the servers' time be set in a multi-node cluster?](../../../../client-api/operations/maintenance/backup/faq.mdx#how-should-the-servers-time-be-set-in-a-multi-node-cluster)
+ * [Is an External Replication a good substitute for a backup task?](../../../../client-api/operations/maintenance/backup/faq.mdx#is-an-external-replication-task-a-good-substitute-for-a-backup-task)
+ * [Can I simply copy the database folder contents whenever I need to create a backup?](../../../../client-api/operations/maintenance/backup/faq.mdx#can-i-simply-copy-the-database-folder-contents-whenever-i-need-to-create-a-backup)
+ * [Does RavenDB automatically delete old backups?](../../../../client-api/operations/maintenance/backup/faq.mdx#does-ravendb-automatically-delete-old-backups)
+ * [Are there any locations that backup files should NOT be stored at?](../../../../client-api/operations/maintenance/backup/faq.mdx#are-there-any-locations-that-backup-files-should-not-be-stored-at)
+ * [What happens when a backup process fails before it is completed?](../../../../client-api/operations/maintenance/backup/faq.mdx#what-happens-when-a-backup-process-fails-before-completion)
+
+
+## FAQ
+
+### Is there a one-time backup?
+
+Yes. Although [backup is a vital ongoing task](../../../../studio/database/tasks/backup-task.mdx#periodic-backup-creation) and is meant to back your data up continuously,
+you can also use [one-time manual backups](../../../../studio/database/tasks/backup-task.mdx#manually-creating-one-time-backups)
+(e.g. before upgrading or other maintenance).
+
+* You can also use [Smuggler](../../../../client-api/smuggler/what-is-smuggler.mdx#what-is-smuggler) as an equivalent of a full backup for a single [export](../../../../client-api/smuggler/what-is-smuggler.mdx#export) operation.
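+
+For illustration, a minimal sketch of such a one-time export (assuming a `docStore`
+as in the other examples in this documentation; the file path is a placeholder):
+
+
+{`// Export the entire database to a single file -
+// the equivalent of a one-time full backup
+var exportOperation = await docStore.Smuggler.ExportAsync(
+    new DatabaseSmugglerExportOptions(),
+    @"C:\\Users\\RavenDB\\backups\\one-time-export.ravendbdump");
+
+// Wait for the export to finish
+await exportOperation.WaitForCompletionAsync();
+`}
+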
+### How do I create a backup of my cluster, not just one database?
+
+You can run a [server-wide ongoing backup](../../../../studio/server/server-wide-backup.mdx)
+which backs up each of the databases in your cluster (a code sketch is given at the end of this answer).
+What does it back up? Both the binary "Snapshot" and the JSON "Backup" backup-task types
+save the entire [database record](../../../../studio/database/settings/database-record.mdx) including:
+
+* Database contents
+* Document extensions (attachments, counters, and time-series)
+* Indexes (a JSON Backup saves only the index definitions, while a Snapshot saves fully built indexes)
+* Revisions
+* Conflict configurations
+* Identities
+* Compare-exchange items
+* Ongoing tasks (Ongoing backup, ETL, Subscription, and Replication tasks)
+
+**Cluster configuration and node setup** can be [re-created](../../../../start/getting-started.mdx#installation--setup)
+and databases can be [restored from backup](../../../../studio/database/create-new-database/from-backup.mdx).
+
+**To prevent downtime while rebuilding**, you can [replicate your database](../../../../studio/database/tasks/ongoing-tasks/hub-sink-replication/overview.mdx)
+so that there is a live version available to distribute the workload and act as a failover.
+[Is an External Replication a good substitute for a backup task?](../../../../client-api/operations/maintenance/backup/faq.mdx#is-an-external-replication-task-a-good-substitute-for-a-backup-task)
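+
+A minimal sketch of defining such a server-wide task from the Client API
+(assuming `PutServerWideBackupConfigurationOperation` and a `docStore` as in the other
+examples in this documentation; the task name, cron schedules, and folder path are placeholders):
+
+
+{`var config = new ServerWideBackupConfiguration
+\{
+    Name = "ClusterWideBackup",
+    BackupType = BackupType.Backup,
+    FullBackupFrequency = "0 2 * * 0",          // weekly full backup (cron schedule)
+    IncrementalBackupFrequency = "0 2 * * 1-6", // daily incremental backups
+    LocalSettings = new LocalSettings \{ FolderPath = @"E:\\Backups" \}
+\};
+
+// Create the server-wide backup task; it is applied to every database in the cluster
+await docStore.Maintenance.Server.SendAsync(
+    new PutServerWideBackupConfigurationOperation(config));
+`}
+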
+### How should the servers' time be set in a multi-node cluster?
+
+The backup task runs on schedule according to the executing server's local time.
+It is recommended that you set all nodes to the same time. This way, backup files'
+time-signatures are consistent even when the backups are created by different nodes.
+### Is an External Replication task a good substitute for a backup task?
+
+Although [External Replication](../../../../studio/database/tasks/ongoing-tasks/external-replication-task.mdx)
+and [Backup](../../../../client-api/operations/maintenance/backup/backup-overview.mdx)
+are both ongoing tasks that create a copy of your data, they have different aims and behavior.
+
+For example, replication tasks don't let you recover data from an earlier point in time after a mistake,
+but they do create a live copy that can serve as a failover and help distribute the workload.
+See [Backup Task -vs- External Replication Task](../../../../studio/database/tasks/backup-task.mdx#backup-task--vs--replication-task).
+### Can I simply copy the database folder contents whenever I need to create a backup?
+
+Simply copying the database folder of a live database will probably create corrupted data in the backup.
+Creating an [ongoing backup task](../../../../client-api/operations/maintenance/backup/backup-overview.mdx) is a one-time operation,
+so there really is no reason to copy the data manually again and again. Properly backing up provides:
+
+* **Up-to-date backups** by incrementally and frequently updating changes in the data.
+* **The creation of a reliable point-in-time freeze** of backed-up data that can be used in case of mistaken deletes or patches.
+* **The assurance of ACID compliance** for backed up data during interactions with the file system.
+### Does RavenDB automatically delete old backups?
+
+You can configure RavenDB to delete old backups with the `RetentionPolicy` feature.
+If you enable it, RavenDB will delete backups after the `TimeSpan` that you set.
+By default, `RetentionPolicy` is disabled.
+
+Learn how to change the [Retention Policy via the RavenDB Studio](../../../../studio/database/tasks/backup-task.mdx#retention-policy).
+Learn how to change the [Retention Policy via API](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#backup-retention-policy).
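+
+A minimal sketch of enabling a retention policy from the Client API, as part of a
+backup-task configuration (the 14-day span is a placeholder):
+
+
+{`var config = new PeriodicBackupConfiguration
+\{
+    // Delete backups that are older than 14 days
+    RetentionPolicy = new RetentionPolicy
+    \{
+        Disabled = false,
+        MinimumBackupAgeToKeep = TimeSpan.FromDays(14)
+    \}
+\};
+`}
+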
+### Are there any locations that backup files should NOT be stored at?
+
+It is recommended **not to store backups on the same drive as your database** data files,
+since both the database and the backups would be exposed to the same risks.
+
+* Disk space can run low as backups start piling up unless you [set your retention policy for backups](../../../../client-api/operations/maintenance/backup/faq.mdx#does-ravendb-automatically-delete-old-backups).
+* There are many [options for backup locations](../../../../studio/database/tasks/backup-task.mdx#destination).
+* We recommend creating ongoing backups in two different types of locations (e.g. cloud and local machine).
+  You can store your backups in multiple locations by setting up one [ongoing backup task](../../../../studio/database/tasks/backup-task.mdx)
+  with multiple destinations.
+### What happens when a backup process fails before completion?
+
+While in progress, the backup content is written to an `.in-progress` file on disk.
+
+* Once the **backup is complete**, the file is renamed to its correct final name.
+* If the backup process **fails before completion**, the `.in-progress` file remains on disk.
+  This file will not be used in any future Restore process.
+  If the failed process was an incremental-backup task, future incremental backups will
+  continue from the point reached before the failed backup started, so that the backup remains consistent with the source.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/backup/restore.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/restore.mdx
new file mode 100644
index 0000000000..c241753cf1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/backup/restore.mdx
@@ -0,0 +1,230 @@
+---
+title: "Restore"
+hide_table_of_contents: true
+sidebar_label: Restore
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Restore
+
+
+* A backed-up database can be restored to a new database, either
+ by client API methods or through the Studio.
+
+* On a [sharded](../../../../sharding/overview.mdx) database, restore
+ is performed per shard, using the backups created by the shards.
+ Read about restore on a sharded database [in the section dedicated to it](../../../../sharding/backup-and-restore/restore.mdx).
+
+* In this page:
+  * [Restoring a Database: Configuration and Execution](../../../../client-api/operations/maintenance/backup/restore.mdx#restoring-a-database-configuration-and-execution)
+ * [Optional Settings](../../../../client-api/operations/maintenance/backup/restore.mdx#optional-settings)
+ * [Restore Database to a Single Node](../../../../client-api/operations/maintenance/backup/restore.mdx#restore-database-to-a-single-node)
+ * [Restore Database to Multiple Nodes](../../../../client-api/operations/maintenance/backup/restore.mdx#restore-database-to-multiple-nodes)
+ * [Restore to a Single Node & Replicate to Other Nodes](../../../../client-api/operations/maintenance/backup/restore.mdx#restore-database-to-a-single-node--replicate-it-to-other-nodes)
+ * [Restore to Multiple Nodes Simultaneously](../../../../client-api/operations/maintenance/backup/restore.mdx#restore-database-to-multiple-nodes-simultaneously)
+ * [Recommended Precautions](../../../../client-api/operations/maintenance/backup/restore.mdx#recommended-precautions)
+
+
+## Restoring a Database: Configuration and Execution
+
+To restore a database, set a `RestoreBackupConfiguration` instance and pass
+it to `RestoreBackupOperation` for execution.
+### `RestoreBackupOperation`
+
+
+{`public RestoreBackupOperation(RestoreBackupConfiguration restoreConfiguration)
+`}
+
+
+### `RestoreBackupConfiguration`
+
+
+{`public class RestoreBackupConfiguration
+\{
+ public string DatabaseName \{ get; set; \}
+ public string BackupLocation \{ get; set; \}
+ public string LastFileNameToRestore \{ get; set; \}
+ public string DataDirectory \{ get; set; \}
+ public string EncryptionKey \{ get; set; \}
+ public bool DisableOngoingTasks \{ get; set; \}
+ public bool SkipIndexes \{ get; set; \}
+\}
+`}
+
+
+
+* Parameters:
+
+ | Parameter | Value | Functionality |
+ | ------------- | ------------- | ----- |
+ | **DatabaseName** | string | Name for the new database. |
+ | **BackupLocation** | string | Local path of the backup file to be restored. Path **must be local** for the restoration to continue.|
+ | **LastFileNameToRestore** (Optional - omit for default) | string | [Last incremental backup file](../../../../server/ongoing-tasks/backup-overview.mdx#restoration-procedure) to restore. **Default behavior: Restore all backup files in the folder.** |
+ | **DataDirectory** (Optional - omit for default) | string | The new database data directory. **Default folder: Under the "Databases" folder, in a folder that carries the restored database's name.** |
+ | **EncryptionKey** (Optional - omit for default) | string | A key for an encrypted database. **Default behavior: Try to restore as if DB is unencrypted.**|
+ | **DisableOngoingTasks** (Optional - omit for default) | boolean | `true` - disable ongoing tasks when Restore is complete. `false` - enable ongoing tasks when Restore is complete. **Default: `false` (Ongoing tasks will run when Restore is complete).**|
+ | **SkipIndexes** (Optional - omit for default) | boolean | `true` to disable the import of indexes, `false` to enable it. **Default: `false` (restore all indexes).**|
+
+
+ * Verify that RavenDB has full access to the backup-files and database folders.
+ * Make sure your server has permissions to read from `BackupLocation` and write to `DataDirectory`.
+
+
+
+
+## Optional Settings
+
+### `LastFileNameToRestore`
+Restore incremental backup files up to and including the selected file, and stop restoring there.
+
+* E.g. -
+ * These are the files in your backup folder:
+ 2018-12-26-09-00.ravendb-full-backup
+ 2018-12-26-12-00.ravendb-incremental-backup
+ 2018-12-26-15-00.ravendb-incremental-backup
+ 2018-12-26-18-00.ravendb-incremental-backup
+ * Feed **LastFileNameToRestore** with the 2018-12-26-12-00 incremental-backup file name:
+
+
+{`//Last incremental backup file to restore from
+restoreConfiguration.LastFileNameToRestore = @"2018-12-26-12-00.ravendb-incremental-backup";
+`}
+
+
+ * The full-backup and 12:00 incremental-backup files **will** be restored.
+ The 15:00 and 18:00 files will **not** be restored.
+### `DataDirectory`
+
+Specify the directory into which the database will be restored.
+
+
+{`// Restore to the specified directory path
+var dataPath = @"C:\\Users\\RavenDB\\backups\\2018-12-26-16-17.ravendb-Products-A-backup\\restoredDatabaseLocation";
+restoreConfiguration.DataDirectory = dataPath;
+`}
+
+
+### `EncryptionKey`
+
+This is where you need to provide your encryption key if your backup is encrypted.
+
+
+{`restoreConfiguration.EncryptionKey = "your_encryption_key";
+`}
+
+
+### `DisableOngoingTasks`
+
+Set **DisableOngoingTasks** to **true** to disable the execution of ongoing tasks after restoration.
+See [Recommended Precautions](../../../../client-api/operations/maintenance/backup/restore.mdx#recommended-precautions).
+
+
+{`// Do or do not run ongoing tasks after restoration.
+// Default setting is FALSE, to allow tasks' execution when the backup is restored.
+restoreConfiguration.DisableOngoingTasks = true;
+`}
+
+
+
+
+
+
+## Restore Database to a Single Node
+
+* **Configuration**
+ * Set `DatabaseName` with the **new database name**.
+ * Set `BackupLocation` with a **local path for the backup files**.
+
+* **Execution**
+ * Pass the configured `RestoreBackupConfiguration` to `RestoreBackupOperation`.
+ * Send the restore-backup operation to the server to start the restoration execution.
+
+* **Code Sample**:
+
+
+{`var restoreConfiguration = new RestoreBackupConfiguration();
+
+// New database name
+restoreConfiguration.DatabaseName = "newProductsDatabase";
+
+// Local path with a backup file
+var backupPath = @"C:\\Users\\RavenDB\\backups\\2018-12-26-16-17.ravendb-Products-A-backup";
+restoreConfiguration.BackupLocation = backupPath;
+
+var restoreBackupTask = new RestoreBackupOperation(restoreConfiguration);
+docStore.Maintenance.Server.Send(restoreBackupTask);
+`}
+
+
+
+
+
+## Restore Database to Multiple Nodes
+
+### Restore Database to a Single Node & Replicate it to Other Nodes
+
+The common approach to restoring a database that should reside on multiple nodes is to restore the backed-up
+database to a single server and then expand the database group to additional nodes, allowing normal replication.
+
+* Verify relevant nodes exist in your cluster. [Add nodes](../../../../server/clustering/cluster-api.mdx#add-node-to-the-cluster) as needed.
+* Manage the database-group topology.
+ Add a node to the database-group using the [Studio](../../../../studio/database/settings/manage-database-group.mdx)
+ or from your [code](../../../../client-api/operations/server-wide/add-database-node.mdx), as sketched below, to replicate the database to the other nodes.
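+
+A minimal sketch of expanding the database group from code
+(the database name and the node tag "B" are placeholders):
+
+
+{`// Add the restored database to another cluster node;
+// omit the node tag to let the server pick a node
+await docStore.Maintenance.Server.SendAsync(
+    new AddDatabaseNodeOperation("newProductsDatabase", node: "B"));
+`}
+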
+### Restore Database to Multiple Nodes Simultaneously
+
+You can create the cluster in advance, and restore the database to multiple nodes simultaneously.
+
+
+
+* When a [logical-backup](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#logical-backup)
+ is restored, each document receives a new change-vector according to the node it resides on.
+ When the database instances synchronize, this change-vector will be updated to include the tags of all the database's nodes.
+
+* When a [snapshot](../../../../client-api/operations/maintenance/backup/backup-overview.mdx#snapshot) is restored,
+ documents are **not** assigned a new change-vector because the databases kept by all nodes are considered identical.
+ Each document retains the original change-vector it had during backup.
+ When the database instances synchronize, documents' change-vectors do **not** change.
+
+
+
+* On the first node, restore the database using its original name.
+* On other nodes, restore the database using different names.
+* Wait for the restoration to complete on all nodes.
+* **Soft-delete** the additional databases (those with altered names) from the cluster.
+ [Soft-delete](../../../../client-api/operations/server-wide/delete-database.mdx#operations--server--how-to-delete-a-database)
+ the databases by setting `HardDelete` to `false`, to retain the data files on disk (a code sketch follows this list).
+* Rename the database folder on all nodes to the original database name.
+* [Expand](../../../../server/clustering/rachis/cluster-topology.mdx#modifying-the-topology) the database group to all relevant nodes.
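+
+A minimal sketch of the soft-delete step from code
+(the database name and the node tag are placeholders):
+
+
+{`// Soft-delete the renamed database: with HardDelete set to false,
+// the data files are retained on disk
+docStore.Maintenance.Server.Send(
+    new DeleteDatabasesOperation(
+        "tempProductsDatabase", hardDelete: false, fromNode: "B"));
+`}
+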
+
+
+
+## Recommended Precautions
+
+
+When restoring a backed-up database, you may be interested only in the restored data
+and not in any ongoing tasks that may have existed during backup.
+
+* E.g., an ETL ongoing task from a production cluster may have unwanted results in a testing environment.
+
+In such cases, **disable** ongoing tasks using the [DisableOngoingTasks](../../../../client-api/operations/maintenance/backup/restore.mdx#disableongoingtasks) flag.
+
+* Code Sample:
+
+
+{`// Do or do not run ongoing tasks after restoration.
+// Default setting is FALSE, to allow tasks' execution when the backup is restored.
+restoreConfiguration.DisableOngoingTasks = true;
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/clean-change-vector.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/clean-change-vector.mdx
new file mode 100644
index 0000000000..46611cf113
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/clean-change-vector.mdx
@@ -0,0 +1,90 @@
+---
+title: "Clean Change Vector"
+hide_table_of_contents: true
+sidebar_label: Clean Change Vector
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Clean Change Vector
+
+
+* A database's [change vector](../../../server/clustering/replication/change-vector.mdx) contains entries from each instance of the database
+in the database group. However, even when an instance no longer exists (because it was removed or replaced), its entry remains in the
+database change vector. These entries can build up over time, leading to longer change vectors that take up unnecessary space.
+
+* **`UpdateUnusedDatabasesOperation`** lets you specify the IDs of database instances that no longer exist so that their entries can be
+removed from the database change vector.
+
+* This operation does not affect any documents' _current_ change vectors, but from now on, when documents are modified or created, their
+change vectors will not include the obsolete entries.
+
+
+## Syntax
+
+
+
+{`public UpdateUnusedDatabasesOperation(string database, HashSet<string> unusedDatabaseIds)
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ----- | ---- |
+| **database** | `string` | Name of the database |
+| **unusedDatabaseIds** | `HashSet<string>` | The database IDs to be removed from the change vector |
+
+
+
+## Example
+
+In the 'General Stats' view in the [management studio](../../../studio/overview.mdx), you can see your database's current change vector (it's
+the same as the change vector of the database's most recently updated/created document).
+
+Below we see the change vector of an [example database](../../../start/about-examples.mdx) "NorthWind". It includes three entries: one for the
+NorthWind instance currently housed on cluster node A (whose ID begins with `N79J...`), and two for instances that were previously
+housed on node A but no longer exist.
+
+
+
+This code removes the obsolete entries specified by their database instance IDs:
+
+
+
+
+{`documentStore.Maintenance.Server.Send(
+    new UpdateUnusedDatabasesOperation(documentStore.Database, new HashSet<string>
+{
+    "0N64iiIdYUKcO+yq1V0cPA",
+    "xwmnvG1KBkSNXfl7/0yJ1A"
+}));
+`}
+
+
+
+
+{`await documentStore.Maintenance.Server.SendAsync(
+    new UpdateUnusedDatabasesOperation(documentStore.Database, new HashSet<string>
+{
+    "0N64iiIdYUKcO+yq1V0cPA",
+    "xwmnvG1KBkSNXfl7/0yJ1A"
+}));
+`}
+
+
+
+
+
+
+Next time a document is modified, you will see that the database change vector has been cleaned.
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_category_.json b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_category_.json
new file mode 100644
index 0000000000..9986c022fe
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 2,
+  "label": "Configuration"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-csharp.mdx
new file mode 100644
index 0000000000..fe295f9187
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-csharp.mdx
@@ -0,0 +1,182 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The default database configuration settings can be customized:
+
+ * From the Client API - as described in this article.
+
+ * From Studio - via the [Database Settings](../../../../studio/database/settings/database-settings.mdx#database-settings) view.
+
+* In this page:
+
+ * [Put database settings operation](../../../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#put-database-settings-operation)
+
+ * [Get database settings operation](../../../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#get-database-settings-operation)
+
+
+Do not modify the database settings unless you are an expert and know what you're doing.
+
+
+
+
+## Put database settings operation
+
+* Use `PutDatabaseSettingsOperation` to modify the default database configuration.
+
+* Only **database-level** settings can be customized using this operation.
+ See the [Configuration overview](../../../../server/configuration/configuration-options.mdx) article to learn how to customize the **server-level** settings.
+
+* Note: for the changes to take effect, the database must be **reloaded**.
+ Reloading is accomplished by disabling and enabling the database using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx).
+ See the following example:
+
+
+
+
+{`// 1. Modify the database settings:
+// ================================
+
+// Define the settings dictionary with the key-value pairs to set, for example:
+var settings = new Dictionary<string, string>
+{
+ ["Databases.QueryTimeoutInSec"] = "350",
+ ["Indexing.Static.DeploymentMode"] = "Rolling"
+};
+
+// Define the put database settings operation,
+// specify the database name & pass the settings dictionary
+var putDatabaseSettingsOp = new PutDatabaseSettingsOperation(documentStore.Database, settings);
+
+// Execute the operation by passing it to Maintenance.Send
+documentStore.Maintenance.Send(putDatabaseSettingsOp);
+
+// 2. RELOAD the database for the change to take effect:
+// =====================================================
+
+// Disable database
+var disableDatabaseOp = new ToggleDatabasesStateOperation(documentStore.Database, true);
+documentStore.Maintenance.Server.Send(disableDatabaseOp);
+
+// Enable database
+var enableDatabaseOp = new ToggleDatabasesStateOperation(documentStore.Database, false);
+documentStore.Maintenance.Server.Send(enableDatabaseOp);
+`}
+
+
+
+
+{`// 1. Modify the database settings:
+// ================================
+
+// Define the settings dictionary with the key-value pairs to set, for example:
+var settings = new Dictionary<string, string>
+{
+ ["Databases.QueryTimeoutInSec"] = "350",
+ ["Indexing.Static.DeploymentMode"] = "Rolling"
+};
+
+// Define the put database settings operation,
+// specify the database name & pass the settings dictionary
+var putDatabaseSettingsOp = new PutDatabaseSettingsOperation(documentStore.Database, settings);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await documentStore.Maintenance.SendAsync(putDatabaseSettingsOp);
+
+// 2. RELOAD the database for the change to take effect:
+// =====================================================
+
+// Disable database
+var disableDatabaseOp = new ToggleDatabasesStateOperation(documentStore.Database, true);
+await documentStore.Maintenance.Server.SendAsync(disableDatabaseOp);
+
+// Enable database
+var enableDatabaseOp = new ToggleDatabasesStateOperation(documentStore.Database, false);
+await documentStore.Maintenance.Server.SendAsync(enableDatabaseOp);
+`}
+
+
+
+**Syntax**:
+
+
+
+{`PutDatabaseSettingsOperation(string databaseName, Dictionary<string, string> configurationSettings)
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------------|------------------------------|----------------------------------------------------|
+| databaseName | `string` | Name of database for which to change the settings. |
+| configurationSettings | `Dictionary<string, string>` | The configuration settings to set. |
+
+
+
+
+## Get database settings operation
+
+* Use `GetDatabaseSettingsOperation` to get the configuration settings that were customized for the database.
+
+* Only settings that have been changed will be retrieved.
+
+
+
+
+{`// Define the get database settings operation, specify the database name
+var getDatabaseSettingsOp = new GetDatabaseSettingsOperation(documentStore.Database);
+
+// Execute the operation by passing it to Maintenance.Send
+var customizedSettings = documentStore.Maintenance.Send(getDatabaseSettingsOp);
+
+// Get the customized value
+var customizedValue = customizedSettings.Settings["Databases.QueryTimeoutInSec"];
+`}
+
+
+
+
+{`// Define the get database settings operation, specify the database name
+var getDatabaseSettingsOp = new GetDatabaseSettingsOperation(documentStore.Database);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+var customizedSettings = await documentStore.Maintenance.SendAsync(getDatabaseSettingsOp);
+
+// Get the customized value
+var customizedValue = customizedSettings.Settings["Databases.QueryTimeoutInSec"];
+`}
+
+
+
+**Syntax**:
+
+
+
+{`GetDatabaseSettingsOperation(string databaseName)
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|----------|-------------------------------------------------------------|
+| databaseName | `string` | The database name for which to get the customized settings. |
+
+
+
+
+{`// Executing the operation returns the following object:
+public class DatabaseSettings
+\{
+    // Configuration settings that have been customized
+    public Dictionary<string, string> Settings \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-nodejs.mdx
new file mode 100644
index 0000000000..f4faebeeac
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-nodejs.mdx
@@ -0,0 +1,130 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The default database configuration settings can be customized:
+
+ * From the Client API - as described in this article.
+
+ * From Studio - via the [Database Settings](../../../../studio/database/settings/database-settings.mdx#database-settings) view.
+
+* In this page:
+
+ * [Put database settings operation](../../../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#put-database-settings-operation)
+
+ * [Get database settings operation](../../../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#get-database-settings-operation)
+
+
+Do not modify the database settings unless you are an expert and know what you're doing.
+
+
+
+
+## Put database settings operation
+
+* Use `PutDatabaseSettingsOperation` to modify the default database configuration.
+
+* Only **database-level** settings can be customized using this operation.
+ See the [Configuration overview](../../../../server/configuration/configuration-options.mdx) article to learn how to customize the **server-level** settings.
+
+* Note: for the changes to take effect, the database must be **reloaded**.
+ Reloading is accomplished by disabling and enabling the database using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx).
+ See the following example:
+
+
+
+{`// 1. Modify the database settings:
+// ================================
+
+// Define a settings object with key-value pairs to set, for example:
+const settings = \{
+ "Databases.QueryTimeoutInSec": "350",
+ "Indexing.Static.DeploymentMode": "Rolling"
+\};
+
+// Define the put database settings operation,
+// specify the database name & pass the settings dictionary
+const putDatabaseSettingsOp = new PutDatabaseSettingsOperation(documentStore.database, settings)
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(putDatabaseSettingsOp);
+
+// 2. RELOAD the database for the change to take effect:
+// =====================================================
+
+// Disable database
+const disableDatabaseOp = new ToggleDatabasesStateOperation(documentStore.database, true);
+await documentStore.maintenance.server.send(disableDatabaseOp);
+
+// Enable database
+const enableDatabaseOp = new ToggleDatabasesStateOperation(documentStore.database, false);
+await documentStore.maintenance.server.send(enableDatabaseOp);
+`}
+
+
+**Syntax**:
+
+
+
+{`const putDatabaseSettingsOp = new PutDatabaseSettingsOperation(databaseName, configurationSettings)
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------------|-----------|----------------------------------------------------|
+| databaseName | `string` | Name of database for which to change the settings. |
+| configurationSettings | `object` | The configuration settings to set. |
+
+
+
+
+## Get database settings operation
+
+* Use `GetDatabaseSettingsOperation` to get the configuration settings that were customized for the database.
+
+* Only settings that have been changed will be retrieved.
+
+
+
+{`// Define the get database settings operation, specify the database name
+const getDatabaseSettingsOp = new GetDatabaseSettingsOperation(documentStore.database);
+
+// Execute the operation by passing it to maintenance.send
+const customizedSettings = await documentStore.maintenance.send(getDatabaseSettingsOp);
+
+// Get the customized value
+const customizedValue = customizedSettings.settings["Databases.QueryTimeoutInSec"];
+`}
+
+
+**Syntax**:
+
+
+
+{`const getDatabaseSettingsOp = new GetDatabaseSettingsOperation(databaseName);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|----------|-------------------------------------------------------------|
+| databaseName | `string` | The database name for which to get the customized settings. |
+
+
+
+
+{`// Executing the operation returns the following object:
+\{
+ settings // An object with key-value configuration pairs
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-php.mdx
new file mode 100644
index 0000000000..c220393c6c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_database-settings-operation-php.mdx
@@ -0,0 +1,134 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The default database configuration settings can be customized:
+
+ * From the Client API - as described in this article.
+
+ * From Studio - via the [Database Settings](../../../../studio/database/settings/database-settings.mdx#database-settings) view.
+
+* In this page:
+
+ * [Put database settings operation](../../../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#put-database-settings-operation)
+
+ * [Get database settings operation](../../../../client-api/operations/maintenance/configuration/database-settings-operation.mdx#get-database-settings-operation)
+
+
+Do not modify the database settings unless you are an expert and know what you're doing.
+
+
+
+
+## Put database settings operation
+
+* Use `PutDatabaseSettingsOperation` to modify the default database configuration.
+
+* Only **database-level** settings can be customized using this operation.
+ See the [Configuration overview](../../../../server/configuration/configuration-options.mdx) article to learn how to customize the **server-level** settings.
+
+* Note: for the changes to take effect, the database must be **reloaded**.
+ Reloading is accomplished by disabling and enabling the database using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx).
+ See the following example:
+
+
+
+{`// 1. Modify the database settings:
+// ================================
+
+// Define the settings dictionary with the key-value pairs to set, for example:
+$settings = [
+ "Databases.QueryTimeoutInSec" => "350",
+ "Indexing.Static.DeploymentMode" => "Rolling"
+];
+
+// Define the put database settings operation,
+// specify the database name & pass the settings dictionary
+$putDatabaseSettingsOp = new PutDatabaseSettingsOperation($documentStore->getDatabase(), $settings);
+
+// Execute the operation by passing it to Maintenance.Send
+$documentStore->maintenance()->send($putDatabaseSettingsOp);
+
+// 2. RELOAD the database for the change to take effect:
+// =====================================================
+
+// Disable database
+$disableDatabaseOp = new ToggleDatabasesStateOperation($documentStore->getDatabase(), true);
+$documentStore->maintenance()->server()->send($disableDatabaseOp);
+
+// Enable database
+$enableDatabaseOp = new ToggleDatabasesStateOperation($documentStore->getDatabase(), false);
+$documentStore->maintenance()->server()->send($enableDatabaseOp);
+`}
+
+
+**Syntax**:
+
+
+
+{`PutDatabaseSettingsOperation(?string $databaseName, StringMap|array|null $configurationSettings)
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------------|------------|--------------------------------------------------|
+| $databaseName | `?string` | Name of the database to change the settings for. |
+| $configurationSettings | `StringMap` `array` `null` | The configuration settings to set. |
+
+
+
+
+## Get database settings operation
+
+* Use `GetDatabaseSettingsOperation` to get the configuration settings that were customized for the database.
+
+* Only settings that have been changed will be retrieved.
+
+
+
+{`// Define the get database settings operation, specify the database name
+$getDatabaseSettingsOp = new GetDatabaseSettingsOperation($documentStore->getDatabase());
+
+// Execute the operation by passing it to Maintenance.Send
+/** @var DatabaseSettings $customizedSettings */
+$customizedSettings = $documentStore->maintenance()->send($getDatabaseSettingsOp);
+
+// Get the customized value
+$customizedValue = $customizedSettings->getSettings()["Databases.QueryTimeoutInSec"];
+`}
+
+
+**Syntax**:
+
+
+
+{`GetDatabaseSettingsOperation(?string $databaseName);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|-----------|--------------------------------------------------------|
+| $databaseName | `?string` | The database name to get the customized settings for. |
+
+
+
+
+{`// Executing the operation returns the following object:
+class DatabaseSettings
+\{
+    // Configuration settings that have been customized
+    private ?StringMap $settings = null;
+
+    // ...getter and setter
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-csharp.mdx
new file mode 100644
index 0000000000..483dbc9e08
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-csharp.mdx
@@ -0,0 +1,75 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* It is recommended to first refer to the **client-configuration description** in the [put client-configuration](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx) article.
+
+* Use `GetClientConfigurationOperation` to get the current client-configuration set on the server for the database.
+
+* In this page:
+ * [Get client-configuration](../../../../client-api/operations/maintenance/configuration/get-client-configuration.mdx#get-client-configuration)
+ * [Syntax](../../../../client-api/operations/maintenance/configuration/get-client-configuration.mdx#syntax)
+
+
+## Get client-configuration
+
+
+
+
+{`// Define the get client-configuration operation
+var getClientConfigOp = new GetClientConfigurationOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+GetClientConfigurationOperation.Result result = store.Maintenance.Send(getClientConfigOp);
+
+ClientConfiguration clientConfiguration = result.Configuration;
+`}
+
+
+
+
+{`// Define the get client-configuration operation
+var getClientConfigOp = new GetClientConfigurationOperation();
+
+// Execute the operation by passing it to Maintenance.SendAsync
+GetClientConfigurationOperation.Result config =
+ await store.Maintenance.SendAsync(getClientConfigOp);
+
+ClientConfiguration clientConfiguration = config.Configuration;
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetClientConfigurationOperation()
+`}
+
+
+
+
+
+{`// Executing the operation returns the following object:
+public class Result
+\{
+ // The configuration Etag
+ public long Etag \{ get; set; \}
+
+ // The current client-configuration deployed on the server for the database
+ public ClientConfiguration Configuration \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-java.mdx
new file mode 100644
index 0000000000..f6bac44b09
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-java.mdx
@@ -0,0 +1,59 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**GetClientConfigurationOperation** is used to return a client configuration, which is saved on the server and overrides client behavior.
+
+## Syntax
+
+
+
+{`GetClientConfigurationOperation()
+`}
+
+
+
+
+
+{`public static class Result \{
+ private long etag;
+ private ClientConfiguration configuration;
+
+ public long getEtag() \{
+ return etag;
+ \}
+
+ public void setEtag(long etag) \{
+ this.etag = etag;
+ \}
+
+ public ClientConfiguration getConfiguration() \{
+ return configuration;
+ \}
+
+ public void setConfiguration(ClientConfiguration configuration) \{
+ this.configuration = configuration;
+ \}
+\}
+`}
+
+
+
+| Return Value | | |
+| ------------- | ----- | ---- |
+| **Etag** | long | Etag of the configuration |
+| **Configuration** | `ClientConfiguration` | The configuration that will be used by the Client API |
+
+## Example
+
+
+
+{`GetClientConfigurationOperation.Result config
+ = store.maintenance().send(new GetClientConfigurationOperation());
+ClientConfiguration clientConfiguration = config.getConfiguration();
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-nodejs.mdx
new file mode 100644
index 0000000000..4bb6096ee2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-nodejs.mdx
@@ -0,0 +1,67 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* It is recommended to first refer to the **client-configuration description** in the [put client-configuration](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx) article.
+
+* Use `GetClientConfigurationOperation` to get the current client-configuration set on the server for the database.
+
+* In this page:
+ * [Get client-configuration](../../../../client-api/operations/maintenance/configuration/get-client-configuration.mdx#get-client-configuration)
+ * [Syntax](../../../../client-api/operations/maintenance/configuration/get-client-configuration.mdx#syntax)
+
+
+## Get client-configuration
+
+
+
+{`// Define the get client-configuration operation
+const getClientConfigOp = new GetClientConfigurationOperation();
+
+// Execute the operation by passing it to maintenance.send
+const result = await store.maintenance.send(getClientConfigOp);
+
+const configuration = result.configuration;
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const getClientConfigOp = new GetClientConfigurationOperation();
+`}
+
+
+
+
+
+{`// Object returned from store.maintenance.send(getClientConfigOp):
+\{
+ etag,
+  configuration // The configuration object
+\}
+
+// The configuration object:
+\{
+ identityPartsSeparator,
+ etag,
+ disabled,
+ maxNumberOfRequestsPerSession,
+ readBalanceBehavior,
+ loadBalanceBehavior,
+ loadBalancerContextSeed
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-php.mdx
new file mode 100644
index 0000000000..523ea48a7e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-php.mdx
@@ -0,0 +1,59 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* It is recommended to first refer to the **client-configuration description** in the [put client-configuration](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx) article.
+
+* Use `GetClientConfigurationOperation` to get the current client-configuration set on the server for the database.
+
+* In this page:
+ * [Get client-configuration](../../../../client-api/operations/maintenance/configuration/get-client-configuration.mdx#get-client-configuration)
+ * [Syntax](../../../../client-api/operations/maintenance/configuration/get-client-configuration.mdx#syntax)
+
+
+## Get client-configuration
+
+
+
+{`// Define the get client-configuration operation
+$getClientConfigOp = new GetClientConfigurationOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+/** @var GetClientConfigurationResult $result */
+$result = $store->maintenance()->send($getClientConfigOp);
+
+$clientConfiguration = $result->getConfiguration();
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetClientConfigurationOperation()
+`}
+
+
+
+
+
+{`// Executing the operation returns the following object:
+class GetClientConfigurationResult implements ResultInterface
+ private ?int $etag = null;
+ private ?ClientConfiguration $configuration;
+
+ // ... getters and setters
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-python.mdx
new file mode 100644
index 0000000000..10d5210139
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_get-client-configuration-python.mdx
@@ -0,0 +1,60 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* It is recommended to first refer to the **client-configuration description** in the [put client-configuration](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx) article.
+
+* Use `GetClientConfigurationOperation` to get the current client-configuration set on the server for the database.
+
+* In this page:
+ * [Get client-configuration](../../../../client-api/operations/maintenance/configuration/get-client-configuration.mdx#get-client-configuration)
+ * [Syntax](../../../../client-api/operations/maintenance/configuration/get-client-configuration.mdx#syntax)
+
+
+## Get client-configuration
+
+
+
+{`# Define the get client-configuration operation
+get_client_config_op = GetClientConfigurationOperation()
+
+# Execute the operation by passing it to maintenance.send
+result = store.maintenance.send(get_client_config_op)
+
+client_configuration = result.configuration
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class GetClientConfigurationOperation(MaintenanceOperation): ...
+
+# No custom __init__ (the default parameterless constructor is used)
+`}
+
+
+
+
+
+{`# Executing the operation returns the following object:
+class Result:
+ def __init__(self, etag: int, configuration: ClientConfiguration):
+ # The configuration Etag
+ self.etag = etag
+ # The current client-configuration deployed on the server for the database
+ self.configuration = configuration
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-csharp.mdx
new file mode 100644
index 0000000000..df765bc1fa
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-csharp.mdx
@@ -0,0 +1,158 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The **client configuration** is a set of configuration options applied during
+ client-server communication.
+* The initial client configuration can be set by the client when creating the Document Store.
+* A database administrator can modify the current client configuration on the server using the
+ `PutClientConfigurationOperation` operation or via Studio, to gain dynamic control over
+ client-server communication.
+ The client will be updated with the modified configuration the next time it sends a request to the database.
+* In this page:
+
+ * [Client configuration overview and modification](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#client-configuration-overview-and-modification)
+ * [What can be configured](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured)
+ * [Put client configuration (for database)](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#put-client-configuration-(for-database))
+ * [Syntax](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#syntax)
+
+
+## Client configuration overview and modification
+
+* **What is the client configuration**:
+ The client configuration is a set of configuration options that apply to the client when communicating with the database.
+ See [what can be configured](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured) below.
+
+* **Initializing the client configuration** (on the client):
+ This configuration can be initially customized from the client code when creating the Document Store via the [Conventions](../../../../client-api/configuration/conventions.mdx).
+
+* **Overriding the initial client configuration for the database** (on the server):
+
+ * From the client code:
+ Use `PutClientConfigurationOperation` to set the client configuration options on the server.
+ See the example below.
+
+ * From the Studio:
+ Set the client configuration from the [Client Configuration view](../../../../studio/database/settings/client-configuration-per-database.mdx).
+
+* **Updating the running client**:
+
+ * Once the client configuration is modified on the server, the running client will [receive the updated settings](../../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date)
+ the next time it makes a request to the database.
+
+ * Setting the client configuration on the server enables administrators to dynamically control
+ the client behavior after it has started running.
+    For example, load balancing of client requests can be managed on the fly in response to changing system demands.
+
+* The client configuration set for the database level **overrides** the
+ [server-wide client configuration](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx).
+
+
+
+## What can be configured
+
+The following client configuration options are available (a combined example follows this list):
+
+* **Identity parts separator**:
+ Set the separator used for automatically generated document IDs (default is `/`).
+ Applies only to [Identity IDs](../../../../server/kb/document-identifier-generation.mdx#identity-id) and [HiLo IDs](../../../../server/kb/document-identifier-generation.mdx#hilo-algorithm-id).
+
+* **Maximum number of requests per session**:
+ Set this number to restrict the number of requests (Reads & Writes) per session in the client API.
+
+* **Read balance behavior**:
+ Set the Read balance method the client will use when accessing a node with Read requests.
+ Learn more in [Balancing client requests - overview](../../../../client-api/configuration/load-balance/overview.mdx) and [Read balance behavior](../../../../client-api/configuration/load-balance/read-balance-behavior.mdx).
+
+* **Load balance behavior**:
+ Set the Load balance method for Read & Write requests.
+ Learn more in [Load balance behavior](../../../../client-api/configuration/load-balance/load-balance-behavior.mdx).
+
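+A combined sketch is shown below. It uses only the options and types already covered
+in this article, assumes an initialized `documentStore`, and its values are
+illustrative placeholders rather than recommendations.
+
+
+
+{`// Sketch: all four configurable options combined in one ClientConfiguration
+// ==========================================================================
+
+var configuration = new ClientConfiguration
+\{
+    // Separator for automatically generated document IDs (any character except '|')
+    IdentityPartsSeparator = '$',
+
+    // Upper limit on the number of requests (Reads & Writes) per session
+    MaxNumberOfRequestsPerSession = 150,
+
+    // Route Read requests to the fastest node
+    ReadBalanceBehavior = ReadBalanceBehavior.FastestNode,
+
+    // Balance Read & Write requests per session context
+    LoadBalanceBehavior = LoadBalanceBehavior.UseSessionContext
+\};
+
+// Deploy the configuration to the server (full example in the next section)
+documentStore.Maintenance.Send(new PutClientConfigurationOperation(configuration));
+`}
+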
+
+
+## Put client configuration (for database)
+
+
+
+{`// You can customize the client-configuration options in the client
+// when creating the Document Store (this is optional):
+// =================================================================
+
+var documentStore = new DocumentStore
+\{
+ Urls = new[] \{ "ServerURL_1", "ServerURL_2", "..." \},
+ Database = "DefaultDB",
+ Conventions = new DocumentConventions
+ \{
+ // Initialize some client-configuration options:
+ MaxNumberOfRequestsPerSession = 100,
+ IdentityPartsSeparator = '$'
+ // ...
+ \}
+\}.Initialize();
+`}
+
+
+
+
+
+{`// Override the initial client-configuration in the server using the put operation:
+// ================================================================================
+
+using (documentStore)
+\{
+ // Define the client-configuration object
+ ClientConfiguration clientConfiguration = new ClientConfiguration
+ \{
+ MaxNumberOfRequestsPerSession = 200,
+ ReadBalanceBehavior = ReadBalanceBehavior.FastestNode
+ // ...
+ \};
+
+ // Define the put client-configuration operation, pass the configuration
+ var putClientConfigOp = new PutClientConfigurationOperation(clientConfiguration);
+
+ // Execute the operation by passing it to Maintenance.Send
+ documentStore.Maintenance.Send(putClientConfigOp);
+\}
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`public PutClientConfigurationOperation(ClientConfiguration configuration)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-----------------------|------------------------------------------------------------------------|
+| **configuration** | `ClientConfiguration` | Client configuration that will be set on the server (for the database) |
+
+
+
+{`public class ClientConfiguration
+\{
+ public long Etag \{ get; set; \}
+ public bool Disabled \{ get; set; \}
+ public int? MaxNumberOfRequestsPerSession \{ get; set; \}
+ public ReadBalanceBehavior? ReadBalanceBehavior \{ get; set; \}
+ public LoadBalanceBehavior? LoadBalanceBehavior \{ get; set; \}
+ public int? LoadBalancerContextSeed \{ get; set; \}
+ public char? IdentityPartsSeparator; // can be any character except '|'
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-java.mdx
new file mode 100644
index 0000000000..1c58d31237
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-java.mdx
@@ -0,0 +1,35 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**PutClientConfigurationOperation** is used to save a client configuration on the server. It allows you to override the client's settings remotely.
+
+## Syntax
+
+
+
+{`PutClientConfigurationOperation(ClientConfiguration configuration)
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ----- | ---- |
+| **configuration** | `ClientConfiguration` | The configuration that will be used by the client API |
+
+## Example
+
+
+
+{`ClientConfiguration clientConfiguration = new ClientConfiguration();
+clientConfiguration.setMaxNumberOfRequestsPerSession(100);
+clientConfiguration.setReadBalanceBehavior(ReadBalanceBehavior.FASTEST_NODE);
+
+store.maintenance().send(
+ new PutClientConfigurationOperation(clientConfiguration));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-nodejs.mdx
new file mode 100644
index 0000000000..081a49d831
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-nodejs.mdx
@@ -0,0 +1,149 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The **client configuration** is a set of configuration options applied during
+ client-server communication.
+* The initial client configuration can be set by the client when creating the Document Store.
+* A database administrator can modify the current client configuration on the server using the
+ `PutClientConfigurationOperation` operation or via Studio, to gain dynamic control over
+ client-server communication.
+ The client will be updated with the modified configuration the next time it sends a request to the database.
+* In this page:
+
+ * [Client configuration overview and modification](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#client-configuration-overview-and-modification)
+ * [What can be configured](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured)
+ * [Put client configuration (for database)](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#put-client-configuration-(for-database))
+ * [Syntax](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#syntax)
+
+
+## Client configuration overview and modification
+
+* **What is the client configuration**:
+ The client configuration is a set of configuration options that apply to the client when communicating with the database.
+ See [what can be configured](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured) below.
+
+* **Initializing the client configuration** (on the client):
+ This configuration can be initially customized from the client code when creating the Document Store via the [Conventions](../../../../client-api/configuration/conventions.mdx).
+
+* **Overriding the initial client configuration for the database** (on the server):
+
+ * From the client code:
+ Use `PutClientConfigurationOperation` to set the client configuration options on the server.
+ See the example below.
+
+ * From the Studio:
+ Set the client configuration from the [Client Configuration](../../../../studio/database/settings/client-configuration-per-database.mdx) view.
+
+* **Updating the running client**:
+
+ * Once the client configuration is modified on the server, the running client will [receive the updated settings](../../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date)
+ the next time it makes a request to the database.
+
+ * Setting the client configuration on the server enables administrators to dynamically control
+ the client behavior after it has started running.
+    For example, load balancing of client requests can be managed on the fly in response to changing system demands.
+
+* The client configuration set for the database level **overrides** the
+ [server-wide client configuration](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx).
+
+
+
+## What can be configured
+
+The following client configuration options are available:
+
+* **Identity parts separator**:
+ Set the separator used for automatically generated document IDs (default is `/`).
+ Applies only to [Identity IDs](../../../../server/kb/document-identifier-generation.mdx#identity-id) and [HiLo IDs](../../../../server/kb/document-identifier-generation.mdx#hilo-algorithm-id).
+
+* **Maximum number of requests per session**:
+ Set this number to restrict the number of requests (Reads & Writes) per session in the client API.
+
+* **Read balance behavior**:
+ Set the Read balance method the client will use when accessing a node with Read requests.
+ Learn more in [Balancing client requests - overview](../../../../client-api/configuration/load-balance/overview.mdx) and [Read balance behavior](../../../../client-api/configuration/load-balance/read-balance-behavior.mdx).
+
+* **Load balance behavior**:
+ Set the Load balance method for Read & Write requests.
+ Learn more in [Load balance behavior](../../../../client-api/configuration/load-balance/load-balance-behavior.mdx).
+
+
+
+## Put client configuration (for database)
+
+
+
+{`// You can customize the client-configuration options in the client
+// when creating the Document Store (this is optional):
+// =================================================================
+
+const documentStore = new DocumentStore(["serverUrl_1", "serverUrl_2", "..."], "DefaultDB");
+
+documentStore.conventions.maxNumberOfRequestsPerSession = 100;
+documentStore.conventions.identityPartsSeparator = '$';
+// ...
+
+documentStore.initialize();
+`}
+
+
+
+
+
+{`// Override the initial client-configuration in the server using the put operation:
+// ================================================================================
+
+// Define the client-configuration object
+const clientConfiguration = \{
+ maxNumberOfRequestsPerSession: 200,
+ readBalanceBehavior: "FastestNode",
+ // ...
+\};
+
+// Define the put client-configuration operation, pass the configuration
+const putClientConfigOp = new PutClientConfigurationOperation(clientConfiguration);
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(putClientConfigOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const putClientConfigOp = new PutClientConfigurationOperation(configuration);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|----------|------------------------------------------------------------------------|
+| **configuration** | `object` | Client configuration that will be set on the server (for the database) |
+
+
+
+{`// The client-configuration object
+\{
+ identityPartsSeparator,
+ etag,
+ disabled,
+ maxNumberOfRequestsPerSession,
+ readBalanceBehavior,
+ loadBalanceBehavior,
+ loadBalancerContextSeed
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-php.mdx
new file mode 100644
index 0000000000..5064006839
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-php.mdx
@@ -0,0 +1,159 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The **client configuration** is a set of configuration options applied during
+ client-server communication.
+* The initial client configuration can be set by the client when creating the Document Store.
+* A database administrator can modify the current client configuration on the server using the
+ `PutClientConfigurationOperation` operation or via Studio, to gain dynamic control over
+ client-server communication.
+ The client will be updated with the modified configuration the next time it sends a request to the database.
+* In this page:
+
+ * [Client configuration overview and modification](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#client-configuration-overview-and-modification)
+ * [What can be configured](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured)
+ * [Put client configuration (for database)](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#put-client-configuration-(for-database))
+ * [Syntax](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#syntax)
+
+
+## Client configuration overview and modification
+
+* **What is the client configuration**:
+ The client configuration is a set of configuration options that apply to the client when communicating with the database.
+ See [what can be configured](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured) below.
+
+* **Initializing the client configuration** (on the client):
+ This configuration can be initially customized from the client code when creating the Document Store via the [Conventions](../../../../client-api/configuration/conventions.mdx).
+
+* **Overriding the initial client configuration for the database** (on the server):
+
+ * From the client code:
+ Use `PutClientConfigurationOperation` to set the client configuration options on the server.
+ See the example below.
+
+ * From Studio:
+ Set the client configuration from the [Client Configuration](../../../../studio/database/settings/client-configuration-per-database.mdx) view.
+
+* **Updating the running client**:
+
+ * Once the client configuration is modified on the server, the running client will [receive the updated settings](../../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date)
+ the next time it makes a request to the database.
+
+ * Setting the client configuration on the server enables administrators to dynamically control
+ the client behavior after it has started running.
+    For example, load balancing of client requests can be managed on the fly in response to changing system demands.
+
+* The client configuration set for the database level **overrides** the
+ [server-wide client configuration](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx).
+
+
+
+## What can be configured
+
+The following client configuration options are available:
+
+* **Identity parts separator**:
+ Set the separator used for automatically generated document IDs (default is `/`).
+ Applies only to [Identity IDs](../../../../server/kb/document-identifier-generation.mdx#identity-id) and [HiLo IDs](../../../../server/kb/document-identifier-generation.mdx#hilo-algorithm-id).
+
+* **Maximum number of requests per session**:
+ Set this number to restrict the number of requests (Reads & Writes) per session in the client API.
+
+* **Read balance behavior**:
+ Set the Read balance method the client will use when accessing a node with Read requests.
+ Learn more in [Balancing client requests - overview](../../../../client-api/configuration/load-balance/overview.mdx) and [Read balance behavior](../../../../client-api/configuration/load-balance/read-balance-behavior.mdx).
+
+* **Load balance behavior**:
+ Set the Load balance method for Read & Write requests.
+ Learn more in [Load balance behavior](../../../../client-api/configuration/load-balance/load-balance-behavior.mdx).
+
+
+
+## Put client configuration (for database)
+
+
+
+{`// You can customize the client-configuration options in the client
+// when creating the Document Store (this is optional):
+// =================================================================
+
+$urls = ["ServerURL_1", "ServerURL_2", "..."];
+$database = "DefaultDB";
+
+$documentStore = new DocumentStore($urls, $database);
+
+$conventions = new DocumentConventions();
+$conventions->setMaxNumberOfRequestsPerSession(100);
+$conventions->setIdentityPartsSeparator('$');
+// ....
+
+$documentStore->setConventions($conventions);
+
+$documentStore->initialize();
+`}
+
+
+
+
+
+{`// Override the initial client-configuration in the server using the put operation:
+// ================================================================================
+try \{
+ // Define the client-configuration object
+ $clientConfiguration = new ClientConfiguration();
+ $clientConfiguration->setMaxNumberOfRequestsPerSession(200);
+ $clientConfiguration->setReadBalanceBehavior(ReadBalanceBehavior::fastestNode());
+ // ...
+
+ // Define the put client-configuration operation, pass the configuration
+ $putClientConfigOp = new PutClientConfigurationOperation($clientConfiguration);
+
+ // Execute the operation by passing it to Maintenance.Send
+ $documentStore->maintenance()->send($putClientConfigOp);
+\} finally \{
+ $documentStore->close();
+\}
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`PutClientConfigurationOperation(?ClientConfiguration $configuration)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-----------------------|------------------------------------------------------------------------|
+| **$configuration** | `?ClientConfiguration` | Client configuration that will be set on the server (for the database) |
+
+
+
+{`class ClientConfiguration
+\{
+ private ?string $identityPartsSeparator = null;
+ private ?int $etag = null;
+ private bool $disabled = false;
+ private ?int $maxNumberOfRequestsPerSession = null;
+ private ?ReadBalanceBehavior $readBalanceBehavior = null;
+ private ?LoadBalanceBehavior $loadBalanceBehavior = null;
+ private ?int $loadBalancerContextSeed = null;
+
+ // ... getters and setters
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-python.mdx
new file mode 100644
index 0000000000..d20b190479
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/_put-client-configuration-python.mdx
@@ -0,0 +1,149 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The **client configuration** is a set of configuration options applied during
+ client-server communication.
+* The initial client configuration can be set by the client when creating the Document Store.
+* A database administrator can modify the current client configuration on the server using the
+ `PutClientConfigurationOperation` operation or via Studio, to gain dynamic control over
+ client-server communication.
+ The client will be updated with the modified configuration the next time it sends a request to the database.
+* In this page:
+
+ * [Client configuration overview and modification](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#client-configuration-overview-and-modification)
+ * [What can be configured](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured)
+ * [Put client configuration (for database)](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#put-client-configuration-(for-database))
+ * [Syntax](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#syntax)
+
+
+## Client configuration overview and modification
+
+* **What is the client configuration**:
+ The client configuration is a set of configuration options that apply to the client when communicating with the database.
+ See [what can be configured](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured) below.
+
+* **Initializing the client configuration** (on the client):
+ This configuration can be initially customized from the client code when creating the Document Store via the [Conventions](../../../../client-api/configuration/conventions.mdx).
+
+* **Overriding the initial client configuration for the database** (on the server):
+
+ * From the client code:
+ Use `PutClientConfigurationOperation` to set the client configuration options on the server.
+ See the example below.
+
+ * From the Studio:
+ Set the client configuration from the [Client Configuration](../../../../studio/database/settings/client-configuration-per-database.mdx) view.
+
+* **Updating the running client**:
+
+ * Once the client configuration is modified on the server, the running client will [receive the updated settings](../../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date)
+ the next time it makes a request to the database.
+
+ * Setting the client configuration on the server enables administrators to dynamically control
+ the client behavior after it has started running.
+    For example, load balancing of client requests can be managed on the fly in response to changing system demands.
+
+* The client configuration set for the database level **overrides** the
+ [server-wide client configuration](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx).
+
+
+
+## What can be configured
+
+The following client configuration options are available:
+
+* **Identity parts separator**:
+ Set the separator used for automatically generated document IDs (default is `/`).
+ Applies only to [Identity IDs](../../../../server/kb/document-identifier-generation.mdx#identity-id) and [HiLo IDs](../../../../server/kb/document-identifier-generation.mdx#hilo-algorithm-id).
+
+* **Maximum number of requests per session**:
+ Set this number to restrict the number of requests (Reads & Writes) per session in the client API.
+
+* **Read balance behavior**:
+ Set the Read balance method the client will use when accessing a node with Read requests.
+ Learn more in [Balancing client requests - overview](../../../../client-api/configuration/load-balance/overview.mdx) and [Read balance behavior](../../../../client-api/configuration/load-balance/read-balance-behavior.mdx).
+
+* **Load balance behavior**:
+ Set the Load balance method for Read & Write requests.
+ Learn more in [Load balance behavior](../../../../client-api/configuration/load-balance/load-balance-behavior.mdx).
+
+
+
+## Put client configuration (for database)
+
+
+
+{`# You can customize the client-configuration options in the client
+# when creating the Document Store (this is optional):
+# =================================================================
+document_store = DocumentStore(urls=["ServerURL_1", "ServerURL_2", "..."], database="DefaultDB")
+document_store.conventions = DocumentConventions()
+
+# Initialize some client-configuration options:
+document_store.conventions.max_number_of_requests_per_session = 100
+document_store.conventions.identity_parts_separator = "$"
+# ...
+
+document_store.initialize()
+`}
+
+
+
+
+
+{`# Override the initial client-configuration in the server using the put operation:
+# ================================================================================
+with document_store:
+ # Define the client-configuration object
+ client_configuration = ClientConfiguration()
+ client_configuration.max_number_of_requests_per_session = 200
+ client_configuration.read_balance_behavior = ReadBalanceBehavior.FASTEST_NODE
+ # ...
+
+    # Define the put client-configuration operation, pass the configuration
+    put_client_config_op = PutClientConfigurationOperation(client_configuration)
+
+    # Execute the operation by passing it to maintenance.send
+    # (must run inside the 'with' block, while the store is still open)
+    document_store.maintenance.send(put_client_config_op)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class PutClientConfigurationOperation(VoidMaintenanceOperation):
+ def __init__(self, config: ClientConfiguration): ...
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-----------------------|------------------------------------------------------------------------|
+| **config** | `ClientConfiguration` | Client configuration that will be set on the server (for the database) |
+
+
+
+{`class ClientConfiguration:
+ def __init__(self):
+ self.__identity_parts_separator: Union[None, str] = None
+ self.etag: int = 0
+ self.disabled: bool = False
+ self.max_number_of_requests_per_session: Optional[int] = None
+ self.read_balance_behavior: Optional[ReadBalanceBehavior] = None
+ self.load_balance_behavior: Optional[LoadBalanceBehavior] = None
+ self.load_balancer_context_seed: Optional[int] = None
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/database-settings-operation.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/database-settings-operation.mdx
new file mode 100644
index 0000000000..87875c0d6e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/database-settings-operation.mdx
@@ -0,0 +1,43 @@
+---
+title: "Database Settings Operations"
+hide_table_of_contents: true
+sidebar_label: Database Settings Operations
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DatabaseSettingsOperationCsharp from './_database-settings-operation-csharp.mdx';
+import DatabaseSettingsOperationPhp from './_database-settings-operation-php.mdx';
+import DatabaseSettingsOperationNodejs from './_database-settings-operation-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/get-client-configuration.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/get-client-configuration.mdx
new file mode 100644
index 0000000000..5a2ba7e631
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/get-client-configuration.mdx
@@ -0,0 +1,53 @@
+---
+title: "Get Client Configuration Operation (for database)"
+hide_table_of_contents: true
+sidebar_label: Get Client Configuration
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetClientConfigurationCsharp from './_get-client-configuration-csharp.mdx';
+import GetClientConfigurationJava from './_get-client-configuration-java.mdx';
+import GetClientConfigurationPython from './_get-client-configuration-python.mdx';
+import GetClientConfigurationPhp from './_get-client-configuration-php.mdx';
+import GetClientConfigurationNodejs from './_get-client-configuration-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/put-client-configuration.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/put-client-configuration.mdx
new file mode 100644
index 0000000000..fbc0d6fa0f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/configuration/put-client-configuration.mdx
@@ -0,0 +1,57 @@
+---
+title: "Put Client Configuration Operation (for database)"
+hide_table_of_contents: true
+sidebar_label: Put Client Configuration
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PutClientConfigurationCsharp from './_put-client-configuration-csharp.mdx';
+import PutClientConfigurationJava from './_put-client-configuration-java.mdx';
+import PutClientConfigurationPython from './_put-client-configuration-python.mdx';
+import PutClientConfigurationPhp from './_put-client-configuration-php.mdx';
+import PutClientConfigurationNodejs from './_put-client-configuration-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_add-connection-string-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_add-connection-string-csharp.mdx
new file mode 100644
index 0000000000..182fb4d22a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_add-connection-string-csharp.mdx
@@ -0,0 +1,357 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the [PutConnectionStringOperation](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#the%c2%a0putconnectionstringoperation%c2%a0method) method to define a connection string in your database.
+
+* In this page:
+ * [Add a RavenDB connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-ravendb-connection-string)
+ * [Add an SQL connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-sql-connection-string)
+ * [Add a Snowflake connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-snowflake-connection-string)
+ * [Add an OLAP connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-olap-connection-string)
+ * [Add an Elasticsearch connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-elasticsearch-connection-string)
+ * [Add a Kafka connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-kafka-connection-string)
+ * [Add a RabbitMQ connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-rabbitmq-connection-string)
+ * [Add an Azure Queue Storage connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-azure-queue-storage-connection-string)
+ * [Add an Amazon SQS connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-amazon-sqs-connection-string)
+ * [The PutConnectionStringOperation method](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#the%c2%a0putconnectionstringoperation%c2%a0method)
+
+
+## Add a RavenDB connection string
+
+RavenDB connection strings are used by [RavenDB ETL Tasks](../../../../server/ongoing-tasks/etl/raven.mdx).
+
+#### Example:
+
+
+{`// Define a connection string to a RavenDB database destination
+// ============================================================
+var ravenDBConStr = new RavenConnectionString
+\{
+ Name = "ravendb-connection-string-name",
+ Database = "target-database-name",
+ TopologyDiscoveryUrls = new[] \{ "https://rvn2:8080" \}
+\};
+
+// Deploy (send) the connection string to the server via the PutConnectionStringOperation
+// ======================================================================================
+var PutConnectionStringOp = new PutConnectionStringOperation<RavenConnectionString>(ravenDBConStr);
+PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+`}
+
+
+
+#### Syntax:
+
+
+{`public class RavenConnectionString : ConnectionString
+\{
+ public override ConnectionStringType Type => ConnectionStringType.Raven;
+
+ public string Database \{ get; set; \} // Target database name
+ public string[] TopologyDiscoveryUrls; // List of server urls in the target RavenDB cluster
+\}
+`}
+
+
+
+
+
+**Secure servers**
+
+To [connect to secure RavenDB servers](../../../../server/security/authentication/certificate-management.mdx#enabling-communication-between-servers:-importing-and-exporting-certificates)
+you need to:
+1. Export the server certificate from the source server.
+2. Install it as a client certificate on the destination server.
+
+This can be done from the Studio [Certificates view](../../../../server/security/authentication/certificate-management.mdx#studio-certificates-management-view).
+
+
+
+
+
+## Add an SQL connection string
+
+SQL connection strings are used by RavenDB [SQL ETL Tasks](../../../../server/ongoing-tasks/etl/sql.mdx).
+
+#### Example:
+
+
+{`// Define a connection string to a SQL database destination
+// ========================================================
+var sqlConStr = new SqlConnectionString
+\{
+ Name = "sql-connection-string-name",
+
+ // Define destination factory name
+ FactoryName = "MySql.Data.MySqlClient",
+
+ // Define the destination database
+ // May also need to define authentication and encryption parameters
+ // By default, encrypted databases are sent over encrypted channels
+ ConnectionString = "host=127.0.0.1;user=root;database=Northwind"
+\};
+
+// Deploy (send) the connection string to the server via the PutConnectionStringOperation
+// ======================================================================================
+var PutConnectionStringOp = new PutConnectionStringOperation<SqlConnectionString>(sqlConStr);
+PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+`}
+
+
+
+#### Syntax:
+
+
+{`public class SqlConnectionString : ConnectionString
+\{
+ public override ConnectionStringType Type => ConnectionStringType.Sql;
+
+ public string ConnectionString \{ get; set; \}
+ public string FactoryName \{ get; set; \}
+\}
+`}
+
+
+
+
+
+## Add a Snowflake connection string
+
+[Snowflake connection strings](https://github.com/snowflakedb/snowflake-connector-net/blob/master/doc/Connecting.md)
+are used by RavenDB [Snowflake ETL Tasks](../../../../server/ongoing-tasks/etl/snowflake.mdx).
+
+#### Example:
+
+
+{`// Define a connection string to a Snowflake warehouse database
+// ==========================================================
+var SnowflakeConStr = new SnowflakeConnectionString
+\{
+ Name = "snowflake-connection-string-name",
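+    // SnowflakeAccount, SnowflakeUser, and SnowflakePassword are placeholders for your credentials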
+ ConnectionString = "ACCOUNT = " + SnowflakeAccount + "; USER = " + SnowflakeUser + "; PASSWORD = " + SnowflakePassword
+\};
+
+// Deploy (send) the connection string to the server via the PutConnectionStringOperation
+// ======================================================================================
+var PutConnectionStringOp =
+    new PutConnectionStringOperation<SnowflakeConnectionString>(SnowflakeConStr);
+PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+`}
+
+
+
+
+
+## Add an OLAP connection string
+
+OLAP connection strings are used by RavenDB [OLAP ETL Tasks](../../../../server/ongoing-tasks/etl/olap.mdx).
+
+#### Example: To a local machine
+
+
+{`// Define a connection string to a local OLAP destination
+// ======================================================
+OlapConnectionString olapConStr = new OlapConnectionString
+\{
+ Name = "olap-connection-string-name",
+ LocalSettings = new LocalSettings
+ \{
+ FolderPath = "path-to-local-folder"
+ \}
+\};
+
+// Deploy (send) the connection string to the server via the PutConnectionStringOperation
+// ======================================================================================
+var PutConnectionStringOp = new PutConnectionStringOperation<OlapConnectionString>(olapConStr);
+PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+`}
+
+
+
+#### Example: To a cloud-based server
+
+* The following example shows a connection string to Amazon AWS.
+* Adjust the parameters as needed if you are using other cloud-based servers (e.g. Google, Azure, Glacier, S3, FTP).
+* The available parameters are listed in [ETL destination settings](../../../../server/ongoing-tasks/etl/olap.mdx#etl-destination-settings).
+
+
+
+{`// Define a connection string to an AWS OLAP destination
+// =====================================================
+var olapConStr = new OlapConnectionString
+\{
+ Name = "myOlapConnectionStringName",
+ S3Settings = new S3Settings
+ \{
+ BucketName = "myBucket",
+ RemoteFolderName = "my/folder/name",
+ AwsAccessKey = "myAccessKey",
+ AwsSecretKey = "myPassword",
+ AwsRegionName = "us-east-1"
+ \}
+\};
+
+// Deploy (send) the connection string to the server via the PutConnectionStringOperation
+// ======================================================================================
+var PutConnectionStringOp = new PutConnectionStringOperation<OlapConnectionString>(olapConStr);
+PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+`}
+
+
+
+#### Syntax:
+
+
+{`public class OlapConnectionString : ConnectionString
+\{
+ public override ConnectionStringType Type => ConnectionStringType.Olap;
+
+ public LocalSettings LocalSettings \{ get; set; \}
+ public S3Settings S3Settings \{ get; set; \}
+ public AzureSettings AzureSettings \{ get; set; \}
+ public GlacierSettings GlacierSettings \{ get; set; \}
+ public GoogleCloudSettings GoogleCloudSettings \{ get; set; \}
+ public FtpSettings FtpSettings \{ get; set; \}
+\}
+`}
+
+
+
+
+
+## Add an Elasticsearch connection string
+
+Elasticsearch connection strings are used by RavenDB [Elasticsearch ETL Tasks](../../../../server/ongoing-tasks/etl/elasticsearch.mdx).
+
+#### Example:
+
+
+{`// Define a connection string to an Elasticsearch destination
+// ==========================================================
+var elasticSearchConStr = new ElasticSearchConnectionString
+\{
+ Name = "elasticsearch-connection-string-name",
+
+ // Elasticsearch Nodes URLs
+ Nodes = new[] \{ "http://localhost:9200" \},
+
+ // Authentication Method
+ Authentication = new Raven.Client.Documents.Operations.ETL.ElasticSearch.Authentication
+ \{
+ Basic = new BasicAuthentication
+ \{
+ Username = "John",
+ Password = "32n4j5kp8"
+ \}
+ \}
+\};
+
+// Deploy (send) the connection string to the server via the PutConnectionStringOperation
+// ======================================================================================
+var PutConnectionStringOp =
+    new PutConnectionStringOperation<ElasticSearchConnectionString>(elasticSearchConStr);
+PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+`}
+
+
+
+#### Syntax:
+
+
+{`public class ElasticSearchConnectionString : ConnectionString
+\{
+    public override ConnectionStringType Type => ConnectionStringType.ElasticSearch;
+
+    public string[] Nodes \{ get; set; \}                 // Elasticsearch node URLs
+    public Authentication Authentication \{ get; set; \}  // Optional authentication
+\}
+
+public class Authentication
+\{
+    public BasicAuthentication Basic \{ get; set; \}
+\}
+
+public class BasicAuthentication
+\{
+    public string Username \{ get; set; \}
+    public string Password \{ get; set; \}
+\}
+`}
+
+
+
+
+
+## Add a Kafka connection string
+
+Kafka connection strings are used by RavenDB [Kafka Queue ETL Tasks](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx).
+Learn how to add a Kafka connection string in the [Add a Kafka connection string](../../../../server/ongoing-tasks/etl/queue-etl/kafka#add-a-kafka-connection-string) section.
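+
+For orientation, here is a minimal hedged sketch; it assumes the Queue ETL types
+`QueueConnectionString`, `QueueBrokerType`, and `KafkaConnectionSettings` from the
+RavenDB client API; see the linked article for the authoritative details.
+
+
+
+{`// A minimal sketch - define and deploy a Kafka connection string
+// ===============================================================
+var kafkaConStr = new QueueConnectionString
+\{
+    Name = "kafka-connection-string-name",
+    BrokerType = QueueBrokerType.Kafka,
+    KafkaConnectionSettings = new KafkaConnectionSettings
+    \{
+        // Placeholder address - replace with your Kafka bootstrap servers
+        BootstrapServers = "localhost:9092"
+    \}
+\};
+
+store.Maintenance.Send(new PutConnectionStringOperation<QueueConnectionString>(kafkaConStr));
+`}
+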
+
+
+
+## Add a RabbitMQ connection string
+
+RabbitMQ connection strings are used by RavenDB [RabbitMQ Queue ETL Tasks](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx).
+Learn how to add a RabbitMQ connection string in the [Add a RabbitMQ connection string](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#add-a-rabbitmq-connection-string) section.
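+
+A similarly hedged minimal sketch is shown below; it assumes a
+`RabbitMqConnectionSettings` type with a `ConnectionString` property, so verify the
+details in the linked article.
+
+
+
+{`// A minimal sketch - define and deploy a RabbitMQ connection string
+// ==================================================================
+var rabbitMqConStr = new QueueConnectionString
+\{
+    Name = "rabbitmq-connection-string-name",
+    BrokerType = QueueBrokerType.RabbitMq,
+    RabbitMqConnectionSettings = new RabbitMqConnectionSettings
+    \{
+        // Placeholder AMQP URI - replace with your RabbitMQ server address
+        ConnectionString = "amqp://guest:guest@localhost:5672/"
+    \}
+\};
+
+store.Maintenance.Send(new PutConnectionStringOperation<QueueConnectionString>(rabbitMqConStr));
+`}
+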
+
+
+
+## Add an Azure Queue Storage connection string
+
+Azure Queue Storage connection strings are used by RavenDB [Azure Queue Storage ETL Tasks](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue.mdx).
+Learn to add an Azure Queue Storage connection string in the [Add an Azure Queue Storage connection string](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#add-an-azure-queue-storage-connection-string) section.
+
+
+
+## Add an Amazon SQS connection string
+
+Amazon SQS connection strings are used by RavenDB [Amazon SQS ETL Tasks](../../../../server/ongoing-tasks/etl/queue-etl/amazon-sqs.mdx).
+Learn to add an SQS connection string in [this section](../../../../server/ongoing-tasks/etl/queue-etl/amazon-sqs.mdx#add-an-amazon-sqs-connection-string).
+
+
+
+## The `PutConnectionStringOperation` method
+
+
+
+{`public PutConnectionStringOperation(T connectionString)
+`}
+
+
+
+| Parameters | Type | Description |
+|----------------------|---------------------------------|----------------------------------------------------|
+| **connectionString** | `RavenConnectionString` | Object that defines the RavenDB connection string. |
+| **connectionString** | `SqlConnectionString` | Object that defines the SQL connection string. |
+| **connectionString** | `SnowflakeConnectionString` | Object that defines the Snowflake connection string. |
+| **connectionString** | `OlapConnectionString` | Object that defines the OLAP connection string. |
+| **connectionString** | `ElasticSearchConnectionString` | Object that defines the Elasticsearch connection string. |
+| **connectionString** | `QueueConnectionString` | Object that defines the connection string for the Queue ETL tasks (Kafka, RabbitMQ, Azure Queue Storage, and Amazon SQS). |
+
+
+
+{`// All the connection string class types inherit from this abstract ConnectionString class:
+// ========================================================================================
+
+public abstract class ConnectionString
+\{
+ // A name for the connection string
+ public string Name \{ get; set; \}
+
+ // The connection string type
+ public abstract ConnectionStringType Type \{ get; \}
+\}
+
+public enum ConnectionStringType
+\{
+    None,
+ Raven,
+ Sql,
+ Olap,
+ ElasticSearch,
+ Queue,
+ Snowflake
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_category_.json b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_category_.json
new file mode 100644
index 0000000000..3f9c806e15
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 8,
+ "label": Connection strings,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_get-connection-string-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_get-connection-string-csharp.mdx
new file mode 100644
index 0000000000..91aec46192
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_get-connection-string-csharp.mdx
@@ -0,0 +1,147 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetConnectionStringsOperation` to retrieve properties for a specific connection string
+  or for all connection strings defined in the database.
+
+* To learn how to create a new connection string, see [Add Connection String Operation](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx).
+
+* In this page:
+ * [Get connection string by name and type](../../../../client-api/operations/maintenance/connection-strings/get-connection-string.mdx#get-connection-string-by-name-and-type)
+  * [Get all connection strings](../../../../client-api/operations/maintenance/connection-strings/get-connection-string.mdx#get-all-connection-strings)
+ * [Syntax](../../../../client-api/operations/maintenance/connection-strings/get-connection-string.mdx#syntax)
+
+
+## Get connection string by name and type
+
+The following example retrieves a RavenDB Connection String:
+
+
+
+{`using (var store = new DocumentStore())
+\{
+ // Request to get a specific connection string, pass its name and type:
+ // ====================================================================
+ var getRavenConStrOp =
+ new GetConnectionStringsOperation("ravendb-connection-string-name", ConnectionStringType.Raven);
+
+ GetConnectionStringsResult connectionStrings = store.Maintenance.Send(getRavenConStrOp);
+
+ // Access results:
+ // ===============
+    Dictionary<string, RavenConnectionString> ravenConnectionStrings =
+ connectionStrings.RavenConnectionStrings;
+
+ var numberOfRavenConnectionStrings = ravenConnectionStrings.Count;
+ var ravenConStr = ravenConnectionStrings["ravendb-connection-string-name"];
+
+ var targetUrls = ravenConStr.TopologyDiscoveryUrls;
+ var targetDatabase = ravenConStr.Database;
+\}
+`}
+
+
+
+
+
+## Get all connection strings
+
+
+
+{`using (var store = new DocumentStore())
+\{
+ // Get all connection strings:
+ // ===========================
+ var getAllConStrOp = new GetConnectionStringsOperation();
+ GetConnectionStringsResult allConnectionStrings = store.Maintenance.Send(getAllConStrOp);
+
+ // Access results:
+ // ===============
+
+ // RavenDB
+    Dictionary<string, RavenConnectionString> ravenConnectionStrings =
+        allConnectionStrings.RavenConnectionStrings;
+
+    // SQL
+    Dictionary<string, SqlConnectionString> sqlConnectionStrings =
+        allConnectionStrings.SqlConnectionStrings;
+
+    // OLAP
+    Dictionary<string, OlapConnectionString> olapConnectionStrings =
+        allConnectionStrings.OlapConnectionStrings;
+
+    // Elasticsearch
+    Dictionary<string, ElasticSearchConnectionString> elasticsearchConnectionStrings =
+        allConnectionStrings.ElasticSearchConnectionStrings;
+
+    // Access the Queue ETL connection strings in a similar manner:
+    // ============================================================
+    Dictionary<string, QueueConnectionString> queueConnectionStrings =
+        allConnectionStrings.QueueConnectionStrings;
+
+ var kafkaConStr = queueConnectionStrings["kafka-connection-string-name"];
+\}
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetConnectionStringsOperation()
+public GetConnectionStringsOperation(string connectionStringName, ConnectionStringType type)
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------------------|------------------------|--------------------------------------------------------------------------------|
+| **connectionStringName** | `string` | Connection string name |
+| **type** | `ConnectionStringType` | Connection string type: `Raven`, `Sql`, `Olap`, `ElasticSearch`, or `Queue` |
+
+
+
+{`public enum ConnectionStringType
+\{
+ Raven,
+ Sql,
+ Olap,
+ ElasticSearch,
+ Queue
+\}
+`}
+
+
+
+| Return value of `store.Maintenance.Send(GetConnectionStringsOperation)` | |
+|--------------------------------------------------------------------------|---------------------------------------------------------------|
+| `GetConnectionStringsResult` | Class containing all connection strings defined on the database |
+
+
+
+{`public class GetConnectionStringsResult
+\{
+    public Dictionary<string, RavenConnectionString> RavenConnectionStrings \{ get; set; \}
+    public Dictionary<string, SqlConnectionString> SqlConnectionStrings \{ get; set; \}
+    public Dictionary<string, OlapConnectionString> OlapConnectionStrings \{ get; set; \}
+    public Dictionary<string, ElasticSearchConnectionString> ElasticSearchConnectionStrings \{ get; set; \}
+    public Dictionary<string, QueueConnectionString> QueueConnectionStrings \{ get; set; \}
+\}
+`}
+
+
+
+
+A detailed syntax for each connection string type is available in the [Add connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx) article.
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_remove-connection-string-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_remove-connection-string-csharp.mdx
new file mode 100644
index 0000000000..9336081ac3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/_remove-connection-string-csharp.mdx
@@ -0,0 +1,57 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `RemoveConnectionStringOperation` to remove a connection string definition from the database.
+
+* In this page:
+  * [Remove connection string](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string.mdx#remove-connection-string)
+ * [Syntax](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string.mdx#syntax)
+
+
+## Remove connection string
+
+The following example removes a RavenDB Connection String.
+
+
+
+{`var ravenConnectionString = new RavenConnectionString()
+\{
+ // Note:
+ // Only the 'Name' property of the connection string is needed for the remove operation.
+ // Other properties are not considered.
+ Name = "ravendb-connection-string-name"
+\};
+
+// Define the remove connection string operation,
+// pass the connection string to be removed.
+var removeConStrOp
+    = new RemoveConnectionStringOperation<RavenConnectionString>(ravenConnectionString);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(removeConStrOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`public RemoveConnectionStringOperation(T connectionString)
+`}
+
+
+
+| Parameter | Type | Description |
+|----------------------|-------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **connectionString** | `T` | Connection string to remove: `RavenConnectionString` `SqlConnectionString` `OlapConnectionString` `ElasticSearchConnectionString` `QueueConnectionString` |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/add-connection-string.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/add-connection-string.mdx
new file mode 100644
index 0000000000..1c01bc98f7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/add-connection-string.mdx
@@ -0,0 +1,36 @@
+---
+title: "Add Connection String Operation"
+hide_table_of_contents: true
+sidebar_label: Add Connection String
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import AddConnectionStringCsharp from './_add-connection-string-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/get-connection-string.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/get-connection-string.mdx
new file mode 100644
index 0000000000..7c11b2c6bc
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/get-connection-string.mdx
@@ -0,0 +1,28 @@
+---
+title: "Get Connection String Operation"
+hide_table_of_contents: true
+sidebar_label: Get Connection String
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetConnectionStringCsharp from './_get-connection-string-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/remove-connection-string.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/remove-connection-string.mdx
new file mode 100644
index 0000000000..97c9634ccf
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/connection-strings/remove-connection-string.mdx
@@ -0,0 +1,28 @@
+---
+title: "Remove Connection String Operation"
+hide_table_of_contents: true
+sidebar_label: Remove Connection String
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import RemoveConnectionStringCsharp from './_remove-connection-string-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-csharp.mdx
new file mode 100644
index 0000000000..8e95b81eb1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-csharp.mdx
@@ -0,0 +1,378 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `AddEtlOperation` method to add a new ongoing ETL task to your database.
+
+* To learn about ETL (Extract, Transform, Load) ongoing tasks, see the [ETL Basics](../../../../server/ongoing-tasks/etl/basics.mdx) article.
+ To learn how to manage ETL tasks from Studio, see [Ongoing tasks - overview](../../../../studio/database/tasks/ongoing-tasks/general-info.mdx).
+
+* In this page:
+
+ * [Add RavenDB ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-ravendb-etl-task)
+ * [Add SQL ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-sql-etl-task)
+ * [Add Snowflake ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-snowflake-etl-task)
+ * [Add OLAP ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-olap-etl-task)
+ * [Add Elasticsearch ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-elasticsearch-etl-task)
+ * [Add Kafka ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-kafka-etl-task)
+ * [Add RabbitMQ ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-rabbitmq-etl-task)
+ * [Add Azure Queue Storage ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-azure-queue-storage-etl-task)
+ * [Add Amazon SQS ETL task](../../../../client-api/operations/maintenance/etl/add-etl.mdx#add-amazon-sqs-etl-task)
+ * [Syntax](../../../../client-api/operations/maintenance/etl/add-etl.mdx#syntax)
+
+
+## Add RavenDB ETL task
+
+* Learn about the RavenDB ETL task in the **[RavenDB ETL task](../../../../server/ongoing-tasks/etl/raven.mdx)** article.
+* Learn how to define a connection string for the RavenDB ETL task here: **[Add a RavenDB connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-ravendb-connection-string)**
+* To manage the RavenDB ETL task from Studio, see **[Studio: RavenDB ETL task](../../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task.mdx)**.
+
+The following example adds a RavenDB ETL task:
+
+
+
+{`// Define the RavenDB ETL task configuration object
+// ================================================
+var ravenEtlConfig = new RavenEtlConfiguration
+\{
+ Name = "task-name",
+ ConnectionStringName = "raven-connection-string-name",
+ Transforms =
+ \{
+ new Transformation
+ \{
+ // The script name
+ Name = "script-name",
+
+ // RavenDB collections the script uses
+ Collections = \{ "Employees" \},
+
+ // The transformation script
+ Script = @"loadToEmployees (\{
+ Name: this.FirstName + ' ' + this.LastName,
+ Title: this.Title
+ \});"
+ \}
+ \},
+
+ // Do not prevent task failover to another node (optional)
+ PinToMentorNode = false
+\};
+
+// Define the AddEtlOperation
+// ==========================
+var operation = new AddEtlOperation(ravenEtlConfig);
+
+// Execute the operation by passing it to Maintenance.Send
+// =======================================================
+AddEtlOperationResult result = store.Maintenance.Send(operation);
+`}
+
+
+
+
+
+## Add SQL ETL task
+
+* Learn about the SQL ETL task in the **[SQL ETL task](../../../../server/ongoing-tasks/etl/sql.mdx)** article.
+* Learn how to define a connection string for the SQL ETL task here: **[Add an SQL connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-sql-connection-string)**
+
+The following example adds an SQL ETL task:
+
+
+
+{`// Define the SQL ETL task configuration object
+// ============================================
+var sqlEtlConfig = new SqlEtlConfiguration
+\{
+ Name = "task-name",
+ ConnectionStringName = "sql-connection-string-name",
+ SqlTables =
+ \{
+ new SqlEtlTable \{TableName = "Orders", DocumentIdColumn = "Id", InsertOnlyMode = false\},
+ new SqlEtlTable \{TableName = "OrderLines", DocumentIdColumn = "OrderId", InsertOnlyMode = false\},
+ \},
+ Transforms =
+ \{
+ new Transformation
+ \{
+ Name = "script-name",
+ Collections = \{ "Orders" \},
+ Script = @"var orderData = \{
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ \};
+
+ for (var i = 0; i < this.Lines.length; i++) \{
+ var line = this.Lines[i];
+ orderData.TotalCost += line.PricePerUnit;
+
+ // Load to SQL table 'OrderLines'
+ loadToOrderLines(\{
+ OrderId: id(this),
+ Qty: line.Quantity,
+ Product: line.Product,
+ Cost: line.PricePerUnit
+ \});
+ \}
+ orderData.TotalCost = Math.round(orderData.TotalCost * 100) / 100;
+
+ // Load to SQL table 'Orders'
+ loadToOrders(orderData)"
+ \}
+ \},
+
+ // Do not prevent task failover to another node (optional)
+ PinToMentorNode = false
+\};
+
+// Define the AddEtlOperation
+// ===========================
+var operation = new AddEtlOperation(sqlEtlConfig);
+
+// Execute the operation by passing it to Maintenance.Send
+// =======================================================
+AddEtlOperationResult result = store.Maintenance.Send(operation);
+`}
+
+
+
+
+
+## Add Snowflake ETL task
+
+* Learn about the Snowflake ETL task in the **[Snowflake ETL task](../../../../server/ongoing-tasks/etl/snowflake.mdx)** article.
+* Learn how to define a connection string for the Snowflake ETL task here: **[Add a Snowflake connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-snowflake-connection-string)**
+
+The following example adds a Snowflake ETL task:
+
+
+
+{`// Define the Snowflake ETL task configuration object
+// ==================================================
+var snowflakeEtlConfig = new SnowflakeEtlConfiguration
+\{
+ Name = "task-name",
+ ConnectionStringName = "snowflake-connection-string-name",
+ SnowflakeTables =
+ \{
+ new SnowflakeEtlTable \{TableName = "Orders", DocumentIdColumn = "Id", InsertOnlyMode = false\},
+ new SnowflakeEtlTable \{TableName = "OrderLines", DocumentIdColumn = "OrderId", InsertOnlyMode = false\},
+ \},
+ Transforms =
+ \{
+ new Transformation
+ \{
+ Name = "script-name",
+ Collections = \{ "Orders" \},
+ Script = @"var orderData = \{
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ \};
+
+ for (var i = 0; i < this.Lines.length; i++) \{
+ var line = this.Lines[i];
+ orderData.TotalCost += line.PricePerUnit;
+
+                              // Load to Snowflake table 'OrderLines'
+ loadToOrderLines(\{
+ OrderId: id(this),
+ Qty: line.Quantity,
+ Product: line.Product,
+ Cost: line.PricePerUnit
+ \});
+ \}
+ orderData.TotalCost = Math.round(orderData.TotalCost * 100) / 100;
+
+                          // Load to Snowflake table 'Orders'
+ loadToOrders(orderData)"
+ \}
+ \},
+
+ // Do not prevent task failover to another node (optional)
+ PinToMentorNode = false
+\};
+
+// Define the AddEtlOperation
+// ===========================
+var operation = new AddEtlOperation(snowflakeEtlConfig);
+
+// Execute the operation by passing it to Maintenance.Send
+// =======================================================
+AddEtlOperationResult result = store.Maintenance.Send(operation);
+`}
+
+
+
+
+
+## Add OLAP ETL task
+
+* Learn about the OLAP ETL task in the **[OLAP ETL task](../../../../server/ongoing-tasks/etl/olap.mdx)** article.
+* Learn how to define a connection string for the OLAP ETL task here: **[Add an OLAP connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-olap-connection-string)**
+* To manage the OLAP ETL task from Studio, see **[Studio: OLAP ETL task](../../../../studio/database/tasks/ongoing-tasks/olap-etl-task.mdx)**.
+
+The following example adds an OLAP ETL task:
+
+
+
+{`// Define the OLAP ETL task configuration object
+// =============================================
+var olapEtlConfig = new OlapEtlConfiguration
+\{
+ Name = "task-name",
+ ConnectionStringName = "olap-connection-string-name",
+ Transforms =
+ \{
+ new Transformation
+ \{
+ Name = "script-name",
+ Collections = \{"Orders"\},
+ Script = @"var orderDate = new Date(this.OrderedAt);
+ var year = orderDate.getFullYear();
+ var month = orderDate.getMonth();
+ var key = new Date(year, month);
+ loadToOrders(key, \{
+ Company : this.Company,
+ ShipVia : this.ShipVia
+ \})"
+ \}
+ \}
+\};
+
+// Define the AddEtlOperation
+// ==========================
+var operation = new AddEtlOperation(olapEtlConfig);
+
+// Execute the operation by passing it to Maintenance.Send
+// =======================================================
+AddEtlOperationResult result = store.Maintenance.Send(operation);
+`}
+
+
+
+
+
+## Add Elasticsearch ETL task
+
+* Learn about the Elasticsearch ETL task in the **[Elasticsearch ETL task](../../../../server/ongoing-tasks/etl/elasticsearch.mdx)** article.
+* Learn how to define a connection string for the Elasticsearch ETL task here: **[Add an Elasticsearch connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-elasticsearch-connection-string)**
+* To manage the Elasticsearch ETL task from Studio, see **[Studio: Elasticsearch ETL task](../../../../studio/database/tasks/ongoing-tasks/elasticsearch-etl-task.mdx)**.
+
+The following example adds an Elasticsearch ETL task:
+
+
+
+{`// Define the Elasticsearch ETL task configuration object
+// ======================================================
+var elasticsearchEtlConfig = new ElasticSearchEtlConfiguration
+\{
+ Name = "task-name",
+ ConnectionStringName = "elasticsearch-connection-string-name",
+ ElasticIndexes =
+ \{
+ // Define Elasticsearch Indexes
+ new ElasticSearchIndex
+ \{
+ // Elasticsearch Index name
+ IndexName = "orders",
+ // The Elasticsearch document property that will contain the source RavenDB document id.
+ // Make sure this property is also defined inside the transform script.
+ DocumentIdProperty = "DocId",
+ InsertOnlyMode = false
+ \},
+ new ElasticSearchIndex
+ \{
+ IndexName = "lines",
+ DocumentIdProperty = "OrderLinesCount",
+ // If true, don't send _delete_by_query before appending docs
+ InsertOnlyMode = true
+ \}
+ \},
+ Transforms =
+ \{
+ new Transformation()
+ \{
+ Collections = \{ "Orders" \},
+ Script = @"var orderData = \{
+ DocId: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ \};
+
+ // Write the \`orderData\` as a document to the Elasticsearch 'orders' index
+ loadToOrders(orderData);",
+
+ Name = "script-name"
+ \}
+ \}
+\};
+
+// Define the AddEtlOperation
+// ==========================
+var operation = new AddEtlOperation(elasticsearchEtlConfig);
+
+// Execute the operation by passing it to Maintenance.Send
+// =======================================================
+store.Maintenance.Send(operation);
+`}
+
+
+
+
+
+## Add Kafka ETL task
+
+* Learn about the Kafka ETL task in the **[Kafka ETL task](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx)** article.
+* Learn how to define a connection string for the Kafka ETL task here: **[Add a Kafka connection string](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#add-a-kafka-connection-string)**
+* To manage the Kafka ETL task from Studio, see **[Studio: Kafka ETL task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task.mdx)**.
+* Examples showing how to add a Kafka ETL task are available in the **[Add a Kafka ETL task](../../../../server/ongoing-tasks/etl/queue-etl/kafka.mdx#add-a-kafka-etl-task)** section; a minimal configuration sketch also follows below.
+
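+The following is a minimal hedged sketch of such a configuration, not the full documented example - the task name, connection string name, and the `OrdersTopic` target topic are placeholder assumptions:
+
+{`// Define a Queue ETL task configuration object targeting Kafka
+var kafkaEtlConfig = new QueueEtlConfiguration
+\{
+    Name = "task-name",
+    ConnectionStringName = "kafka-connection-string-name",
+    BrokerType = QueueBrokerType.Kafka,
+    Transforms =
+    \{
+        new Transformation
+        \{
+            Name = "script-name",
+            Collections = \{ "Orders" \},
+            // Load a reduced version of each order to the 'OrdersTopic' Kafka topic
+            Script = @"loadToOrdersTopic(\{
+                           Id: id(this),
+                           Company: this.Company
+                       \});"
+        \}
+    \}
+\};
+
+// Execute the operation by passing it to Maintenance.Send
+AddEtlOperationResult result = store.Maintenance.Send(
+    new AddEtlOperation<QueueConnectionString>(kafkaEtlConfig));
+`}
+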
+
+
+## Add RabbitMQ ETL task
+
+* Learn about the RabbitMQ ETL task in the **[RabbitMQ ETL task](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx)** article.
+* Learn how to define a connection string for the RabbitMQ ETL task here: **[Add a RabbitMQ connection string](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#add-a-rabbitmq-connection-string)**
+* To manage the RabbitMQ ETL task from Studio, see **[Studio: RabbitMQ ETL task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task.mdx)**.
+* Examples showing how to add a RabbitMQ ETL task are available in the **[Add a RabbitMQ ETL task](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq.mdx#add-a-rabbitmq-etl-task)** section; a minimal configuration sketch also follows below.
+
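+The following is a minimal hedged sketch of such a configuration; it differs from the Kafka sketch above mainly in the broker type, and the `OrdersExchange` target exchange is a placeholder assumption:
+
+{`// Define a Queue ETL task configuration object targeting RabbitMQ
+var rabbitMqEtlConfig = new QueueEtlConfiguration
+\{
+    Name = "task-name",
+    ConnectionStringName = "rabbitmq-connection-string-name",
+    BrokerType = QueueBrokerType.RabbitMq,
+    Transforms =
+    \{
+        new Transformation
+        \{
+            Name = "script-name",
+            Collections = \{ "Orders" \},
+            // Load a reduced version of each order to the 'OrdersExchange' exchange
+            Script = @"loadToOrdersExchange(\{ Id: id(this), Company: this.Company \});"
+        \}
+    \}
+\};
+
+store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(rabbitMqEtlConfig));
+`}
+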
+
+
+## Add Azure Queue Storage ETL task
+
+* Learn about the Azure Queue Storage ETL task in the **[Azure Queue Storage ETL task](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue.mdx)** article.
+* Learn how to define a connection string for the Azure Queue Storage ETL task here:
+ **[Add an Azure Queue Storage connection string](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue.mdx#add-an-azure-queue-storage-connection-string)**
+* To manage the Azure Queue Storage ETL task from Studio, see **[Studio: Azure Queue Storage ETL task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl.mdx)**.
+* Examples showing how to add an Azure Queue Storage ETL task are available in the **[Add a Azure Queue Storage ETL task](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue.mdx#add-an-azure-queue-storage-etl-task)** section.
+
+
+
+## Add Amazon SQS ETL task
+
+* Learn about the Amazon SQS ETL task in the **[Amazon SQS ETL task](../../../../server/ongoing-tasks/etl/queue-etl/amazon-sqs.mdx)** article.
+ * [This section](../../../../server/ongoing-tasks/etl/queue-etl/amazon-sqs.mdx#add-an-amazon-sqs-connection-string)
+ shows how to define a connection string to the SQS destination.
+ * [This section](../../../../server/ongoing-tasks/etl/queue-etl/amazon-sqs.mdx#add-an-amazon-sqs-etl-task)
+ shows how to run an ETL task that uses the defined connection string.
+* To learn how to manage the task from Studio, see **[Studio: Amazon SQS ETL Task](../../../../studio/database/tasks/ongoing-tasks/amazon-sqs-etl.mdx)**.
+
+
+
+## Syntax
+
+
+
+{`public AddEtlOperation(EtlConfiguration<T> configuration)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-----------------------|----------------------------------------------------------------------|
+| **configuration** | `EtlConfiguration<T>` | The ETL configuration object, where `T` is the connection string type |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-java.mdx
new file mode 100644
index 0000000000..722c67c680
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-java.mdx
@@ -0,0 +1,159 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `AddEtlOperation` method to add a new ongoing ETL task to your database.
+
+* To learn about ETL (Extract, Transform, Load) ongoing tasks, see the [ETL Basics](../../../../server/ongoing-tasks/etl/basics.mdx) article.
+ To learn how to manage ETL tasks from the Studio, see [Ongoing tasks - overview](../../../../studio/database/tasks/ongoing-tasks/general-info.mdx).
+
+* In this page:
+ * [Example - add Raven ETL](../../../../client-api/operations/maintenance/etl/add-etl.mdx#example---add-raven-etl)
+ * [Example - add SQL ETL](../../../../client-api/operations/maintenance/etl/add-etl.mdx#example---add-sql-etl)
+ * [Example - add OLAP ETL](../../../../client-api/operations/maintenance/etl/add-etl.mdx#example---add-olap-etl)
+ * [Syntax](../../../../client-api/operations/maintenance/etl/add-etl.mdx#syntax)
+
+
+## Example - add Raven ETL
+
+
+
+{`RavenEtlConfiguration configuration = new RavenEtlConfiguration();
+configuration.setName("Employees ETL");
+Transformation transformation = new Transformation();
+transformation.setName("Script #1");
+transformation.setScript("loadToEmployees (\{\\n" +
+ " Name: this.FirstName + ' ' + this.LastName,\\n" +
+ " Title: this.Title\\n" +
+ "\});");
+
+configuration.setTransforms(Arrays.asList(transformation));
+AddEtlOperation<RavenConnectionString> operation = new AddEtlOperation<>(configuration);
+AddEtlOperationResult result = store.maintenance().send(operation);
+`}
+
+
+
+
+
+**Secure servers**:
+
+To [connect secure RavenDB servers](../../../../server/security/authentication/certificate-management.mdx#enabling-communication-between-servers:-importing-and-exporting-certificates)
+you need to:
+
+1. Export the server certificate from the source server.
+2. Install it as a client certificate on the destination server.
+
+This can be done in the RavenDB Studio -> Server Management -> [Certificates view](../../../../server/security/authentication/certificate-management.mdx#studio-certificates-management-view).
+
+
+
+
+## Example - add SQL ETL
+
+
+
+{`SqlEtlConfiguration configuration = new SqlEtlConfiguration();
+SqlEtlTable table1 = new SqlEtlTable();
+table1.setTableName("Orders");
+table1.setDocumentIdColumn("Id");
+table1.setInsertOnlyMode(false);
+
+SqlEtlTable table2 = new SqlEtlTable();
+table2.setTableName("OrderLines");
+table2.setDocumentIdColumn("OrderId");
+table2.setInsertOnlyMode(false);
+
+configuration.setSqlTables(Arrays.asList(table1, table2));
+configuration.setName("Order to SQL");
+configuration.setConnectionStringName("sql-connection-string-name");
+
+Transformation transformation = new Transformation();
+transformation.setName("Script #1");
+transformation.setCollections(Arrays.asList("Orders"));
+transformation.setScript("var orderData = \{\\n" +
+ " Id: id(this),\\n" +
+ " OrderLinesCount: this.Lines.length,\\n" +
+ " TotalCost: 0\\n" +
+ "\};\\n" +
+ "\\n" +
+ " for (var i = 0; i < this.Lines.length; i++) \{\\n" +
+ " var line = this.Lines[i];\\n" +
+ " orderData.TotalCost += line.PricePerUnit;\\n" +
+ "\\n" +
+ " // Load to SQL table 'OrderLines'\\n" +
+ " loadToOrderLines(\{\\n" +
+ " OrderId: id(this),\\n" +
+ " Qty: line.Quantity,\\n" +
+ " Product: line.Product,\\n" +
+ " Cost: line.PricePerUnit\\n" +
+ " \});\\n" +
+ " \}\\n" +
+ " orderData.TotalCost = Math.round(orderData.TotalCost * 100) / 100;\\n" +
+ "\\n" +
+ " // Load to SQL table 'Orders'\\n" +
+ " loadToOrders(orderData)");
+
+configuration.setTransforms(Arrays.asList(transformation));
+
+AddEtlOperation<SqlConnectionString> operation = new AddEtlOperation<>(configuration);
+
+AddEtlOperationResult result = store.maintenance().send(operation);
+`}
+
+
+
+
+
+## Example - add OLAP ETL
+
+
+
+{`OlapEtlConfiguration configuration = new OlapEtlConfiguration();
+
+configuration.setName("Orders ETL");
+configuration.setConnectionStringName("olap-connection-string-name");
+
+Transformation transformation = new Transformation();
+transformation.setName("Script #1");
+transformation.setCollections(Arrays.asList("Orders"));
+transformation.setScript("var orderDate = new Date(this.OrderedAt);\\n"+
+ "var year = orderDate.getFullYear();\\n"+
+ "var month = orderDate.getMonth();\\n"+
+ "var key = new Date(year, month);\\n"+
+ "loadToOrders(key, \{\\n"+
+ " Company : this.Company,\\n"+
+ " ShipVia : this.ShipVia\\n"+
+ "\})"
+);
+
+configuration.setTransforms(Arrays.asList(transformation));
+
+AddEtlOperation<OlapConnectionString> operation = new AddEtlOperation<>(configuration);
+
+AddEtlOperationResult result = store.maintenance().send(operation);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`public AddEtlOperation(EtlConfiguration<T> configuration);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-----------------------|-------------------------------------------------------|
+| **configuration** | `EtlConfiguration<T>` | The ETL configuration object, where `T` is the connection string type |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-nodejs.mdx
new file mode 100644
index 0000000000..df3c02b283
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_add-etl-nodejs.mdx
@@ -0,0 +1,130 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `AddEtlOperation` method to add a new ongoing ETL task to your database.
+
+* To learn about ETL (Extract, Transform, Load) ongoing tasks, see the [ETL Basics](../../../../server/ongoing-tasks/etl/basics.mdx) article.
+ To learn how to manage ETL tasks from the Studio, see [Ongoing tasks - overview](../../../../studio/database/tasks/ongoing-tasks/general-info.mdx).
+
+* In this page:
+ * [Example - add Raven ETL](../../../../client-api/operations/maintenance/etl/add-etl.mdx#example---add-raven-etl)
+ * [Example - add SQL ETL](../../../../client-api/operations/maintenance/etl/add-etl.mdx#example---add-sql-etl)
+ * [Example - add OLAP ETL](../../../../client-api/operations/maintenance/etl/add-etl.mdx#example---add-olap-etl)
+ * [Syntax](../../../../client-api/operations/maintenance/etl/add-etl.mdx#syntax)
+
+
+## Example - add Raven ETL
+
+
+
+{`const etlConfigurationRvn = Object.assign(new RavenEtlConfiguration(), \{
+ connectionStringName: "raven-connection-string-name",
+ disabled: false,
+ name: "etlRvn"
+\});
+
+const transformationRvn = \{
+ applyToAllDocuments: true,
+ name: "Script #1"
+\};
+
+etlConfigurationRvn.transforms = [transformationRvn];
+
+const operationRvn = new AddEtlOperation(etlConfigurationRvn);
+const etlResultRvn = await store.maintenance.send(operationRvn);
+`}
+
+
+
+
+
+## Example - add SQL ETL
+
+
+
+{`const transformation = \{
+ applyToAllDocuments: true,
+ name: "Script #1"
+\};
+
+const table1 = \{
+ documentIdColumn: "Id",
+ insertOnlyMode: false,
+ tableName: "Users"
+\};
+
+const etlConfigurationSql = Object.assign(new SqlEtlConfiguration(), \{
+ connectionStringName: "sql-connection-string-name",
+ disabled: false,
+ name: "etlSql",
+ transforms: [transformation],
+ sqlTables: [table1]
+\});
+
+const operationSql = new AddEtlOperation(etlConfigurationSql);
+const etlResult = await store.maintenance.send(operationSql);
+`}
+
+
+
+
+
+## Example - add OLAP ETL
+
+
+
+{`const transformationOlap = \{
+ applyToAllDocuments: true,
+ name: "Script #1"
+\};
+
+const etlConfigurationOlap = Object.assign(new OlapEtlConfiguration(), \{
+ connectionStringName: "olap-connection-string-name",
+ disabled: false,
+ name: "etlOlap",
+ transforms: [transformationOlap],
+\});
+
+const operationOlap = new AddEtlOperation(etlConfigurationOlap);
+const etlResultOlap = await store.maintenance.send(operationOlap);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const operation = new AddEtlOperation(etlConfiguration);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|---------------------------|-----------------------------------|
+| **configuration** | `EtlConfiguration` object | The ETL task configuration to add |
+
+
+
+{`class EtlConfiguration \{
+ taskId?; // number
+ name; // string
+    mentorNode?; // string
+ connectionStringName; // string
+ transforms; // Transformation[]
+ disabled?; // boolean
+ allowEtlOnNonEncryptedChannel?; // boolean
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_category_.json b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_category_.json
new file mode 100644
index 0000000000..281dfffa47
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 7,
+  "label": "ETL"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_reset-etl-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_reset-etl-csharp.mdx
new file mode 100644
index 0000000000..a7e8977040
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_reset-etl-csharp.mdx
@@ -0,0 +1,32 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+An ETL task processes documents from the point where the last batch finished. To start processing from the very beginning, you can reset the ETL task by using **ResetEtlOperation**.
+
+## Syntax
+
+
+
+{`public ResetEtlOperation(string configurationName, string transformationName)
+`}
+
+
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **configurationName** | `string` | The name of the ETL configuration |
+| **transformationName** | `string` | The name of the ETL transformation |
+
+## Example
+
+
+
+{`ResetEtlOperation operation = new ResetEtlOperation("OrdersExport", "script1");
+store.Maintenance.Send(operation);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_reset-etl-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_reset-etl-java.mdx
new file mode 100644
index 0000000000..fd6345e25d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_reset-etl-java.mdx
@@ -0,0 +1,32 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+An ETL task processes documents from the point where the last batch finished. To start processing from the very beginning, you can reset the ETL task by using **ResetEtlOperation**.
+
+## Syntax
+
+
+
+{`public ResetEtlOperation(String configurationName, String transformationName);
+`}
+
+
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **configurationName** | `String` | The name of the ETL configuration |
+| **transformationName** | `String` | The name of the ETL transformation |
+
+## Example
+
+
+
+{`ResetEtlOperation resetEtlOperation = new ResetEtlOperation("OrdersExport", "script1");
+store.maintenance().send(resetEtlOperation);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_update-etl-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_update-etl-csharp.mdx
new file mode 100644
index 0000000000..30c5921f0e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_update-etl-csharp.mdx
@@ -0,0 +1,56 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+You can modify an existing ETL task by using **UpdateEtlOperation**.
+
+## Syntax
+
+
+
+{`public UpdateEtlOperation(long taskId, EtlConfiguration<T> configuration)
+`}
+
+
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **taskId** | `long` | The ID of the ETL task to update |
+| **configuration** | `EtlConfiguration<T>` | The new ETL configuration, where `T` is the connection string type |
+
+## Example
+
+
+
+{`// AddEtlOperationResult addEtlResult = store.Maintenance.Send(new AddEtlOperation<RavenConnectionString>( ... ));
+
+UpdateEtlOperation operation = new UpdateEtlOperation(
+ addEtlResult.TaskId,
+ new RavenEtlConfiguration
+ \{
+ ConnectionStringName = "raven-connection-string-name",
+ Name = "Employees ETL",
+ Transforms =
+ \{
+ new Transformation
+ \{
+ Name = "Script #1",
+ Collections =
+ \{
+ "Employees"
+ \},
+ Script = @"loadToEmployees (\{
+ Name: this.FirstName + ' ' + this.LastName,
+ Title: this.Title
+ \});"
+ \}
+ \}
+ \});
+
+UpdateEtlOperationResult result = store.Maintenance.Send(operation);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_update-etl-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_update-etl-java.mdx
new file mode 100644
index 0000000000..a68277bbf3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/_update-etl-java.mdx
@@ -0,0 +1,48 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+You can modify an existing ETL task by using **UpdateEtlOperation**.
+
+## Syntax
+
+
+
+{`public UpdateEtlOperation(long taskId, EtlConfiguration<T> configuration);
+`}
+
+
+
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| **taskId** | `long` | The ID of the ETL task to update |
+| **configuration** | `EtlConfiguration<T>` | The new ETL configuration, where `T` is the connection string type |
+
+## Example
+
+
+
+{`// AddEtlOperationResult addEtlResult = store.maintenance().send(new AddEtlOperation<>( ... ));
+
+RavenEtlConfiguration etlConfiguration = new RavenEtlConfiguration();
+etlConfiguration.setConnectionStringName("raven-connection-string-name");
+etlConfiguration.setName("Employees ETL");
+Transformation transformation = new Transformation();
+transformation.setName("Script #1");
+transformation.setCollections(Arrays.asList("Employees"));
+transformation.setScript("loadToEmployees (\{\\n" +
+ " Name: this.FirstName + ' ' + this.LastName,\\n" +
+ " Title: this.Title\\n" +
+ " \});");
+
+etlConfiguration.setTransforms(Arrays.asList(transformation));
+
+UpdateEtlOperation operation = new UpdateEtlOperation<>(
+ addEtlResult.getTaskId(), etlConfiguration);
+UpdateEtlOperationResult result = store.maintenance().send(operation);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/add-etl.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/add-etl.mdx
new file mode 100644
index 0000000000..42a54ed2cf
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/add-etl.mdx
@@ -0,0 +1,63 @@
+---
+title: "Add ETL Operation"
+hide_table_of_contents: true
+sidebar_label: Add ETL
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import AddEtlCsharp from './_add-etl-csharp.mdx';
+import AddEtlJava from './_add-etl-java.mdx';
+import AddEtlNodejs from './_add-etl-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/reset-etl.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/reset-etl.mdx
new file mode 100644
index 0000000000..627efe0e7a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/reset-etl.mdx
@@ -0,0 +1,37 @@
+---
+title: "Operations: How to Reset ETL"
+hide_table_of_contents: true
+sidebar_label: Reset ETL
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ResetEtlCsharp from './_reset-etl-csharp.mdx';
+import ResetEtlJava from './_reset-etl-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/etl/update-etl.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/update-etl.mdx
new file mode 100644
index 0000000000..3a6762c538
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/etl/update-etl.mdx
@@ -0,0 +1,37 @@
+---
+title: "Operations: How to Update ETL"
+hide_table_of_contents: true
+sidebar_label: Update ETL
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import UpdateEtlCsharp from './_update-etl-csharp.mdx';
+import UpdateEtlJava from './_update-etl-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/get-stats.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/get-stats.mdx
new file mode 100644
index 0000000000..05bac5ccf7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/get-stats.mdx
@@ -0,0 +1,54 @@
+---
+title: "Get Statistics"
+hide_table_of_contents: true
+sidebar_label: Get Statistics
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetStatsCsharp from './_get-stats-csharp.mdx';
+import GetStatsJava from './_get-stats-java.mdx';
+import GetStatsPython from './_get-stats-python.mdx';
+import GetStatsPhp from './_get-stats-php.mdx';
+import GetStatsNodejs from './_get-stats-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_category_.json b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_category_.json
new file mode 100644
index 0000000000..b3c0bbf97b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 3,
+  "label": "Identities"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-csharp.mdx
new file mode 100644
index 0000000000..e1b4e99233
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-csharp.mdx
@@ -0,0 +1,115 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Upon document creation, providing a collection name with a pipe symbol (`|`)
+ will cause the server to generate an ID for the new document called an **identity**.
+ E.g. `companies|`
+
+* The identity document ID is unique across the entire cluster within the database scope.
+ It is composed of the provided collection name and an integer value that is continuously incremented.
+
+* Identity values can also be managed from the Studio [identities](../../../../studio/database/documents/identities-view.mdx) view.
+
+* Use `GetIdentitiesOperation` to get the dictionary that maps collection names to their corresponding latest identity values.
+
+
+Learn more about identities in:
+
+* [Document identifier generation - Identity ID](../../../../server/kb/document-identifier-generation.mdx#identity-id)
+* [Working with document identifiers](../../../../client-api/document-identifiers/working-with-document-identifiers.mdx#identities)
+
+
+
+* In this page:
+
+ * [Get identities operation](../../../../client-api/operations/maintenance/identities/get-identities.mdx#get-identities-operation)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/get-identities.mdx#syntax)
+
+
+## Get identities operation
+
+
+
+
+{`// Create a document with an identity ID:
+// ======================================
+
+using (var session = store.OpenSession())
+{
+ // Request the server to generate an identity ID for the new document. Pass:
+ // * The entity to store
+ // * The collection name with a pipe (|) postfix
+ session.Store(new Company { Name = "RavenDB" }, "companies|");
+
+ // If this is the first identity created for this collection,
+ // and if the identity value was not customized
+ // then a document with an identity ID "companies/1" will be created
+ session.SaveChanges();
+}
+
+// Get identities information:
+// ===========================
+
+// Define the get identities operation
+var getIdentitiesOp = new GetIdentitiesOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+Dictionary<string, long> identities = store.Maintenance.Send(getIdentitiesOp);
+
+// Results
+var latestIdentityValue = identities["companies|"]; // => value will be 1
+`}
+
+
+
+
+{`// Create a document with an identity ID:
+// ======================================
+
+using (var asyncSession = store.OpenAsyncSession())
+{
+ // Request the server to generate an identity ID for the new document. Pass:
+ // * The entity to store
+ // * The collection name with a pipe (|) postfix
+    await asyncSession.StoreAsync(new Company { Name = "RavenDB" }, "companies|");
+
+ // If this is the first identity created for this collection,
+ // and if the identity value was not customized
+ // then a document with an identity ID "companies/1" will be created
+    await asyncSession.SaveChangesAsync();
+}
+
+// Get identities information:
+// ===========================
+
+// Define the get identities operation
+var getIdentitiesOp = new GetIdentitiesOperation();
+
+// Execute the operation by passing it to Maintenance.SendAsync
+Dictionary<string, long> identities = await store.Maintenance.SendAsync(getIdentitiesOp);
+
+// Results
+var latestIdentityValue = identities["companies|"]; // => value will be 1
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetIdentitiesOperation();
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-java.mdx
new file mode 100644
index 0000000000..722f0b914f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-java.mdx
@@ -0,0 +1,27 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**GetIdentitiesOperation** is used to return a dictionary that maps each collection name to its latest identity value.
+
+## Syntax
+
+
+
+{`public GetIdentitiesOperation()
+`}
+
+
+
+## Example
+
+
+
+{`Map<String, Long> identities
+ = store.maintenance().send(new GetIdentitiesOperation());
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-nodejs.mdx
new file mode 100644
index 0000000000..801b42934c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-nodejs.mdx
@@ -0,0 +1,79 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Upon document creation, providing a collection name with a pipe symbol (`|`)
+ will cause the server to generate an ID for the new document called an **identity**.
+ E.g. `companies|`
+
+* The identity document ID is unique across the entire cluster within the database scope.
+ It is composed of the provided collection name and an integer value that is continuously incremented.
+
+* Identity values can also be managed from the Studio [identities](../../../../studio/database/documents/identities-view.mdx) view.
+
+* Use `GetIdentitiesOperation` to get the dictionary that maps collection names to their corresponding latest identity values.
+
+
+Learn more about identities in:
+[Document identifier generation - Identity ID](../../../../server/kb/document-identifier-generation.mdx#identity-id)
+
+
+* In this page:
+
+ * [Get identities operation](../../../../client-api/operations/maintenance/identities/get-identities.mdx#get-identities-operation)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/get-identities.mdx#syntax)
+
+
+## Get identities operation
+
+
+
+{`// Create a document with an identity ID:
+// ======================================
+
+const session = documentStore.openSession();
+const company = new Company();
+company.name = "RavenDB";
+
+// Request the server to generate an identity ID for the new document. Pass:
+// * The entity to store
+// * The collection name with a pipe (|) postfix
+await session.store(company, "companies|");
+
+// If this is the first identity created for this collection,
+// and if the identity value was not customized
+// then a document with an identity ID "companies/1" will be created
+await session.saveChanges();
+
+// Get identities information:
+// ===========================
+
+// Define the get identities operation
+const getIdentitiesOp = new GetIdentitiesOperation();
+
+// Execute the operation by passing it to maintenance.send
+const identities = await store.maintenance.send(getIdentitiesOp);
+
+// Results
+const latestIdentityValue = identities["companies|"]; // => value will be 1
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const getIdentitiesOp = new GetIdentitiesOperation();
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-php.mdx
new file mode 100644
index 0000000000..a06083f2e1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-php.mdx
@@ -0,0 +1,86 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Upon document creation, providing a collection name with a pipe symbol (`|`)
+ will cause the server to generate an ID for the new document called an **identity**.
+ E.g. `companies|`
+
+* The identity document ID is unique across the entire cluster within the database scope.
+ It is composed of the provided collection name and an integer value that is continuously incremented.
+
+* Identity values can also be managed from the Studio [identities](../../../../studio/database/documents/identities-view.mdx) view.
+
+* Use `GetIdentitiesOperation` to get the dictionary that maps collection names to their corresponding latest identity values.
+
+
+Learn more about identities in:
+
+* [Document identifier generation - Identity ID](../../../../server/kb/document-identifier-generation.mdx#identity-id)
+* [Working with document identifiers](../../../../client-api/document-identifiers/working-with-document-identifiers.mdx#identities)
+
+
+
+* In this page:
+
+ * [Get identities operation](../../../../client-api/operations/maintenance/identities/get-identities.mdx#get-identities-operation)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/get-identities.mdx#syntax)
+
+
+## Get identities operation
+
+
+
+{`// Create a document with an identity ID:
+// ======================================
+
+$session = $store->openSession();
+try \{
+ // Request the server to generate an identity ID for the new document. Pass:
+ // * The entity to store
+ // * The collection name with a pipe (|) postfix
+ $company = new Company();
+ $company->setName("RavenDB");
+ $session->store($company, "companies|");
+
+ // If this is the first identity created for this collection,
+ // and if the identity value was not customized
+ // then a document with an identity ID "companies/1" will be created
+ $session->saveChanges();
+\} finally \{
+ $session->close();
+\}
+
+// Get identities information:
+// ===========================
+
+// Define the get identities operation
+$getIdentitiesOp = new GetIdentitiesOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+/** @var array $identities */
+$identities = $store->maintenance()->send($getIdentitiesOp);
+
+// Results
+$latestIdentityValue = $identities["companies|"]; // => value will be 1
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`GetIdentitiesOperation();
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-python.mdx
new file mode 100644
index 0000000000..0887420d06
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_get-identities-python.mdx
@@ -0,0 +1,75 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Upon document creation, providing a collection name with a pipe symbol (`|`)
+ will cause the server to generate an ID for the new document called an **identity**.
+ E.g. `companies|`
+
+* The identity document ID is unique across the entire cluster within the database scope.
+ It is composed of the provided collection name and an integer value that is continuously incremented.
+
+* Identity values can also be managed from the Studio [identities](../../../../studio/database/documents/identities-view.mdx) view.
+
+* Use `GetIdentitiesOperation` to get the dictionary that maps collection names to their corresponding latest identity values.
+
+* Learn more about identities in:
+
+ * [Document identifier generation - Identity ID](../../../../server/kb/document-identifier-generation.mdx#identity-id)
+ * [Working with document identifiers](../../../../client-api/document-identifiers/working-with-document-identifiers.mdx#identities)
+
+* In this page:
+
+ * [Get identities operation](../../../../client-api/operations/maintenance/identities/get-identities.mdx#get-identities-operation)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/get-identities.mdx#syntax)
+
+
+## Get identities operation
+
+
+
+{`# Create a document with an identity ID:
+# ======================================
+with store.open_session() as session:
+ # Request the server to generate an identity ID for the new document. Pass:
+ # * The entity to store
+ # * The collection name with a pipe (|) postfix
+ session.store(Company(name="RavenDB"), "companies|")
+
+ # If this is the first identity created for this collection,
+ # and if the identity value was not customized
+ # then a document with an identity ID "companies/1" will be created
+ session.save_changes()
+
+# Get identities information:
+# ===========================
+
+# Define the get identities operation
+get_identities_op = GetIdentitiesOperation()
+
+# Execute the operation by passing it to maintenance.send
+identities = store.maintenance.send(get_identities_op)
+
+# Results
+latest_identity_value = identities["companies|"] # => value will be 1
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class GetIdentitiesOperation(MaintenanceOperation[Dict[str, int]]): ...
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-csharp.mdx
new file mode 100644
index 0000000000..9965d90b1a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-csharp.mdx
@@ -0,0 +1,114 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `NextIdentityForOperation` to increment the latest identity value set on the server for the specified collection in the database.
+
+* The next document created using an identity for this collection will receive the subsequent integer value.
+
+* In this page:
+
+ * [Increment the next identity value](../../../../client-api/operations/maintenance/identities/increment-next-identity.mdx#increment-the-next-identity-value)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/increment-next-identity.mdx#syntax)
+
+
+## Increment the next identity value
+
+
+
+
+{`// Create a document with an identity ID:
+// ======================================
+
+using (var session = store.OpenSession())
+{
+ // Pass a collection name that ends with a pipe '|' to create an identity ID
+ session.Store(new Company { Name = "RavenDB" }, "companies|");
+ session.SaveChanges();
+ // => Document "companies/1" will be created
+}
+
+// Increment the identity value on the server:
+// ===========================================
+
+// Define the next identity operation
+// Pass the collection name (can be with or without a pipe)
+var nextIdentityOp = new NextIdentityForOperation("companies|");
+
+// Execute the operation by passing it to Maintenance.Send
+// The latest value will be incremented to "2"
+// and the next document created with an identity will be assigned "3"
+long incrementedValue = store.Maintenance.Send(nextIdentityOp);
+
+// Create another document with an identity ID:
+// ============================================
+
+using (var session = store.OpenSession())
+{
+ session.Store(new Company { Name = "RavenDB" }, "companies|");
+ session.SaveChanges();
+ // => Document "companies/3" will be created
+}
+`}
+
+
+
+
+{`// Create a document with an identity ID:
+// ======================================
+
+using (var asyncSession = store.OpenAsyncSession())
+{
+ // Pass a collection name that ends with a pipe '|' to create an identity ID
+    await asyncSession.StoreAsync(new Company { Name = "RavenDB" }, "companies|");
+    await asyncSession.SaveChangesAsync();
+ // => Document "companies/1" will be created
+}
+
+// Increment the identity value on the server:
+// ===========================================
+
+// Define the next identity operation
+// Pass the collection name (can be with or without a pipe)
+var nextIdentityOp = new NextIdentityForOperation("companies|");
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// The latest value will be incremented to "2"
+// and the next document created with an identity will be assigned "3"
+long incrementedValue = await store.Maintenance.SendAsync(nextIdentityOp);
+
+// Create another document with an identity ID:
+// ============================================
+
+using (var asyncSession = store.OpenAsyncSession())
+{
+    await asyncSession.StoreAsync(new Company { Name = "AnotherCompany" }, "companies|");
+    await asyncSession.SaveChangesAsync();
+ // => Document "companies/3" will be created
+}
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public NextIdentityForOperation(string name);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------|--------|------------------------------------------------------------------------------------------------------------------------------------------------|
+| **name** | string | The collection name for which to increment the identity value. Can be written with or without a trailing pipe (e.g. "companies" or "companies\|"). |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-nodejs.mdx
new file mode 100644
index 0000000000..80e1db1084
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-nodejs.mdx
@@ -0,0 +1,77 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `NextIdentityForOperation` to increment the latest identity value set on the server for the specified collection in the database.
+
+* The next document created using an identity for this collection will receive the subsequent integer value.
+
+* In this page:
+
+ * [Increment the next identity value](../../../../client-api/operations/maintenance/identities/increment-next-identity.mdx#increment-the-next-identity-value)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/increment-next-identity.mdx#syntax)
+
+
+## Increment the next identity value
+
+
+
+{`// Create a document with an identity ID:
+// ======================================
+
+const session = documentStore.openSession();
+const company = new Company();
+company.name = "RavenDB";
+
+// Pass a collection name that ends with a pipe '|' to create an identity ID
+await session.store(company, "companies|");
+
+await session.saveChanges();
+// => Document "companies/1" will be created
+
+// Increment the identity value on the server:
+// ===========================================
+
+// Define the next identity operation
+// Pass the collection name (can be with or without a pipe)
+const nextIdentityOp = new NextIdentityForOperation("companies|");
+
+// Execute the operation by passing it to maintenance.send
+// The latest value will be incremented to "2"
+// and the next document created with an identity will be assigned "3"
+const incrementedValue = await store.maintenance.send(nextIdentityOp);
+
+// Create another document with an identity ID:
+// ============================================
+
+const anotherCompany = new Company();
+anotherCompany.name = "AnotherCompany";
+
+await session.store(anotherCompany, "companies|");
+await session.saveChanges();
+// => Document "companies/3" will be created
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const nextIdentityOp = new NextIdentityForOperation(name);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------|--------|------------------------------------------------------------------------------------------------------------------------------------------------|
+| **name** | string | The collection name for which to increment the identity value. Can be written with or without a trailing pipe (e.g. "companies" or "companies\|"). |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-php.mdx
new file mode 100644
index 0000000000..a22eef236f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-php.mdx
@@ -0,0 +1,83 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `NextIdentityForOperation` to increment the latest identity value set on the server for the specified collection in the database.
+
+* The next document created using an identity for this collection will receive the subsequent integer value.
+
+* In this page:
+
+ * [Increment the next identity value](../../../../client-api/operations/maintenance/identities/increment-next-identity.mdx#increment-the-next-identity-value)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/increment-next-identity.mdx#syntax)
+
+
+## Increment the next identity value
+
+
+
+{`// Create a document with an identity ID:
+// ======================================
+
+$session = $store->openSession();
+try \{
+ // Pass a collection name that ends with a pipe '|' to create an identity ID
+ $company = new Company();
+ $company->setName("RavenDB");
+ $session->store($company, "companies|");
+ $session->saveChanges();
+ // => Document "companies/1" will be created
+\} finally \{
+ $session->close();
+\}
+
+// Increment the identity value on the server:
+// ===========================================
+
+// Define the next identity operation
+// Pass the collection name (can be with or without a pipe)
+$nextIdentityOp = new NextIdentityForOperation("companies|");
+
+// Execute the operation by passing it to maintenance()->send
+// The latest value will be incremented to "2"
+// and the next document created with an identity will be assigned "3"
+$incrementedValue = $store->maintenance()->send($nextIdentityOp);
+
+// Create another document with an identity ID:
+// ============================================
+
+$session = $store->openSession();
+try \{
+ $company = new Company();
+ $company->setName("RavenDB");
+ $session->store($company, "companies|");
+ $session->saveChanges();
+ // => Document "companies/3" will be created
+\} finally \{
+ $session->close();
+\}
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`NextIdentityForOperation(?string $name);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------|--------|-------------------------------------------------|
+| **$name** | `?string` | The collection name for which to increment the identity value |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-python.mdx
new file mode 100644
index 0000000000..27abeda1d5
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_increment-next-identity-python.mdx
@@ -0,0 +1,71 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `NextIdentityForOperation` to increment the latest identity value set on the server for the specified collection in the database.
+
+* The next document that will be created using an identity for the collection will receive the next consecutive integer value.
+
+* In this page:
+
+ * [Increment the next identity value](../../../../client-api/operations/maintenance/identities/increment-next-identity.mdx#increment-the-next-identity-value)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/increment-next-identity.mdx#syntax)
+
+
+## Increment the next identity value
+
+
+
+{`# Create a document with an identity ID:
+# ======================================
+
+with store.open_session() as session:
+ # Pass a collection name that ends with a pipe '|' to create an identity ID
+ session.store(Company(name="RavenDB"), "companies|")
+ session.save_changes()
+ # => Document "companies/1" will be created
+
+# Increment the identity value on the server:
+# ===========================================
+
+# Define the next identity operation
+# Pass the collection name (can be with or without a pipe)
+next_identity_op = NextIdentityForOperation("companies|")
+
+# Execute the operation by passing it to maintenance.send
+# The latest value will be incremented to "2"
+# and the next document created with an identity will be assigned "3"
+incremented_value = store.maintenance.send(next_identity_op)
+
+# Create another document with an identity ID:
+# ============================================
+
+with store.open_session() as session:
+ session.store(Company(name="RavenDB"), "companies|")
+ session.save_changes()
+ # => Document "companies/3" will be created
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class NextIdentityForOperation(MaintenanceOperation[int]):
+    def __init__(self, name: str): ...
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------|--------|---------------------------------------------------------------------|
+| **name** | `str` | The collection name for which to increment the identity value. Can end with or without a pipe (e.g. "companies" or "companies\|"). |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-csharp.mdx
new file mode 100644
index 0000000000..689a772b0d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-csharp.mdx
@@ -0,0 +1,168 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `SeedIdentityForOperation` to set the latest identity value for the specified collection.
+
+* The next document that will be created using an identity for the collection will receive the next consecutive integer value.
+
+* Identity values can also be managed from the Studio [identities view](../../../../studio/database/documents/identities-view.mdx).
+
+* In this page:
+ * [Set a higher identity value](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#set-a-higher-identity-value)
+ * [Force a lower identity value](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#force-a-lower-identity-value)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#syntax)
+
+
+## Set a higher identity value
+
+You can replace the latest identity value on the server with a new, **higher** number.
+
+
+
+
+
+{`// Seed a higher identity value on the server:
+// ===========================================
+
+// Define the seed identity operation. Pass:
+// * The collection name (can be with or without a pipe)
+// * The new value to set
+var seedIdentityOp = new SeedIdentityForOperation("companies|", 23);
+
+// Execute the operation by passing it to Maintenance.Send
+// The latest value on the server will be incremented to "23"
+// and the next document created with an identity will be assigned "24"
+long seededValue = store.Maintenance.Send(seedIdentityOp);
+
+// Create a document with an identity ID:
+// ======================================
+
+using (var session = store.OpenSession())
+{
+ session.Store(new Company { Name = "RavenDB" }, "companies|");
+ session.SaveChanges();
+ // => Document "companies/24" will be created
+}
+`}
+
+
+
+
+{`// Seed the identity value on the server:
+// ======================================
+
+// Define the seed identity operation. Pass:
+// * The collection name (can be with or without a pipe)
+// * The new value to set
+var seedIdentityOp = new SeedIdentityForOperation("companies|", 23);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// The latest value on the server will be incremented to "23"
+// and the next document created with an identity will be assigned "24"
+long seededValue = await store.Maintenance.SendAsync(seedIdentityOp);
+
+// Create a document with an identity ID:
+// ======================================
+
+using (var asyncSession = store.OpenAsyncSession())
+{
+    await asyncSession.StoreAsync(new Company { Name = "RavenDB" }, "companies|");
+    await asyncSession.SaveChangesAsync();
+ // => Document "companies/24" will be created
+}
+`}
+
+
+
+
+
+
+## Force a lower identity value
+
+* You can set the latest identity value to a number that is **lower** than the current latest value.
+
+* Before proceeding, ensure that no documents exist with an identity value higher than the new number (a minimal check is sketched below).
+
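+For illustration, here is one minimal way to perform this check - this sketch is an assumption, not a required API call; any query that inspects the existing identity-style IDs will do:
+
+{`// A minimal illustrative sketch:
+// scan documents whose IDs start with the collection prefix and inspect
+// their identity suffixes before forcing the value down
+using (var session = store.OpenSession())
+{
+    // Note: LoadStartingWith pages its results (25 per call by default)
+    var companies = session.Advanced.LoadStartingWith<Company>("companies/");
+
+    foreach (var company in companies)
+    {
+        // A document such as "companies/24" would conflict with a new value of 5
+        // and should be deleted or re-IDed before seeding
+        Console.WriteLine(session.Advanced.GetDocumentId(company));
+    }
+}
+`}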
+
+
+
+{`// Force a smaller identity value on the server:
+// =============================================
+
+// Define the seed identity operation. Pass:
+// * The collection name (can be with or without a pipe)
+// * The new value to set
+// * Set 'forceUpdate' to true
+var seedIdentityOp = new SeedIdentityForOperation("companies|", 5, forceUpdate: true);
+
+// Execute the operation by passing it to Maintenance.Send
+// The latest value on the server will be decremented to "5"
+// and the next document created with an identity will be assigned "6"
+long seededValue = store.Maintenance.Send(seedIdentityOp);
+
+// Create a document with an identity ID:
+// ======================================
+
+using (var session = store.OpenSession())
+{
+ session.Store(new Company { Name = "RavenDB" }, "companies|");
+ session.SaveChanges();
+ // => Document "companies/6" will be created
+}
+`}
+
+
+
+
+{`// Force a smaller identity value on the server:
+// =============================================
+
+// Define the seed identity operation. Pass:
+// * The collection name (can be with or without a pipe)
+// * The new value to set
+// * Set 'forceUpdate' to true
+var seedIdentityOp = new SeedIdentityForOperation("companies|", 5, forceUpdate: true);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// The latest value on the server will be decremented to "5"
+// and the next document created with an identity will be assigned "6"
+long seededValue = await store.Maintenance.SendAsync(seedIdentityOp);
+
+// Create a document with an identity ID:
+// ======================================
+
+using (var asyncSession = store.OpenAsyncSession())
+{
+    await asyncSession.StoreAsync(new Company { Name = "RavenDB" }, "companies|");
+    await asyncSession.SaveChangesAsync();
+ // => Document "companies/6" will be created
+}
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public SeedIdentityForOperation(string name, long value, bool forceUpdate = false);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|----------|--------------------------------------------------------------------------------------------------------------------------------|
+| **name**        | `string` | The collection name to seed the identity value for. Can end with or without a pipe (e.g. "companies" or "companies\|"). |
+| **value** | `long` | The number to set as the latest identity value. |
+| **forceUpdate** | `bool` | `true` - force a new value that is lower than the latest. `false` - only a higher value can be set. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-nodejs.mdx
new file mode 100644
index 0000000000..035b520d01
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-nodejs.mdx
@@ -0,0 +1,108 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `SeedIdentityForOperation` to set the latest identity value for the specified collection.
+
+* The next document that will be created using an identity for the collection will receive the next consecutive integer value.
+
+* Identity values can also be managed from the Studio [identities view](../../../../studio/database/documents/identities-view.mdx).
+
+* In this page:
+ * [Set a higher identity value](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#set-a-higher-identity-value)
+ * [Force a lower identity value](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#force-a-lower-identity-value)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#syntax)
+
+
+## Set a higher identity value
+
+You can replace the latest identity value on the server with a new, **higher** number.
+
+
+
+{`// Seed a higher identity value on the server:
+// ===========================================
+
+// Define the seed identity operation. Pass:
+// * The collection name (can be with or without a pipe)
+// * The new value to set
+const seedIdentityOp = new SeedIdentityForOperation("companies|", 23);
+
+// Execute the operation by passing it to maintenance.send
+// The latest value on the server will be incremented to "23"
+// and the next document created with an identity will be assigned "24"
+const seededValue = await store.maintenance.send(seedIdentityOp);
+
+// Create a document with an identity ID:
+// ======================================
+
+const company = new Company();
+company.name = "RavenDB";
+
+await session.store(company, "companies|");
+await session.saveChanges();
+// => Document "companies/24" will be created
+`}
+
+
+
+
+
+## Force a lower identity value
+
+* You can set the latest identity value to a number that is **lower** than the current latest value.
+
+* Before proceeding, ensure that no documents exist with an identity value higher than the new number (a minimal check is sketched below).
+
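+For illustration, here is one minimal way to perform this check - this sketch is an assumption, not a required API call. "companies/6" is the first ID the server would assign after seeding the value 5:
+
+{`// A minimal illustrative sketch:
+const session = store.openSession();
+
+// If such a document exists, delete or re-ID it before seeding the lower value
+const existingDoc = await session.load("companies/6");
+if (existingDoc != null) {
+    console.log("A document with a higher identity ID still exists");
+}
+`}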
+
+
+{`// Force a smaller identity value on the server:
+// =============================================
+
+// Define the seed identity operation. Pass:
+// * The collection name (can be with or without a pipe)
+// * The new value to set
+// * Pass 'true' to force the update
+const seedIdentityOp = new SeedIdentityForOperation("companies|", 5, true);
+
+// Execute the operation by passing it to maintenance.send
+// The latest value on the server will be decremented to "5"
+// and the next document created with an identity will be assigned "6"
+const seededValue = await store.maintenance.send(seedIdentityOp);
+
+// Create a document with an identity ID:
+// ======================================
+
+const company = new Company();
+company.name = "RavenDB";
+
+await session.store(company, "companies|");
+await session.saveChanges();
+// => Document "companies/6" will be created
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const seedIdentityOp = new SeedIdentityForOperation(name, value, forceUpdate);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|-----------|--------------------------------------------------------------------------------------------------------------------------------|
+| **name**        | `string`  | The collection name to seed the identity value for. Can end with or without a pipe (e.g. "companies" or "companies\|"). |
+| **value** | `number` | The number to set as the latest identity value. |
+| **forceUpdate** | `boolean` | `true` - force a new value that is lower than the latest. `false` - only a higher value can be set. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-php.mdx
new file mode 100644
index 0000000000..601c5e7f14
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-php.mdx
@@ -0,0 +1,117 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `SeedIdentityForOperation` to set the latest identity value for the specified collection.
+
+* The next document that will be created using an identity for the collection will receive the next consecutive integer value.
+
+* Identity values can also be managed from the Studio [identities view](../../../../studio/database/documents/identities-view.mdx).
+
+* In this page:
+ * [Set a higher identity value](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#set-a-higher-identity-value)
+ * [Force a lower identity value](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#force-a-lower-identity-value)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#syntax)
+
+
+## Set a higher identity value
+
+You can replace the latest identity value on the server with a new, **higher** number.
+
+
+
+
+{`// Seed a higher identity value on the server:
+// ===========================================
+
+// Define the seed identity operation. Pass:
+// * The collection name (can be with or without a pipe)
+// * The new value to set
+$seedIdentityOp = new SeedIdentityForOperation("companies|", 23);
+
+// Execute the operation by passing it to maintenance()->send
+// The latest value on the server will be incremented to "23"
+// and the next document created with an identity will be assigned "24"
+$seededValue = $store->maintenance()->send($seedIdentityOp);
+
+// Create a document with an identity ID:
+// ======================================
+
+$session = $store->openSession();
+try \{
+ $company = new Company();
+ $company->setName("RavenDB");
+ $session->store($company, "companies|");
+ $session->saveChanges();
+ // => Document "companies/24" will be created
+\} finally \{
+ $session->close();
+\}
+`}
+
+
+
+
+
+## Force a lower identity value
+
+* You can set the latest identity value to a number that is **lower** than the current latest value.
+
+* Before proceeding, ensure that no documents exist with an identity value higher than the new number (a minimal check is sketched below).
+
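+For illustration, here is one minimal way to perform this check - this sketch is an assumption, not a required API call. "companies/6" is the first ID the server would assign after seeding the value 5:
+
+{`// A minimal illustrative sketch:
+$session = $store->openSession();
+try \{
+    // If such a document exists, delete or re-ID it before seeding the lower value
+    $existingDoc = $session->load(Company::class, "companies/6");
+    if ($existingDoc !== null) \{
+        echo "A document with a higher identity ID still exists";
+    \}
+\} finally \{
+    $session->close();
+\}
+`}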
+
+
+{`// Force a smaller identity value on the server:
+// =============================================
+
+// Define the seed identity operation. Pass:
+// * The collection name (can be with or without a pipe)
+// * The new value to set
+// * Set 'forceUpdate' to true
+$seedIdentityOp = new SeedIdentityForOperation("companies|", 5, forceUpdate: true);
+
+// Execute the operation by passing it to maintenance()->send
+// The latest value on the server will be decremented to "5"
+// and the next document created with an identity will be assigned "6"
+$seededValue = $store->maintenance()->send($seedIdentityOp);
+
+// Create a document with an identity ID:
+// ======================================
+
+$session = $store->openSession();
+try \{
+ $company = new Company();
+ $company->setName("RavenDB");
+ $session->store($company, "companies|");
+ $session->saveChanges();
+ // => Document "companies/6" will be created
+\} finally \{
+ $session->close();
+\}
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`SeedIdentityForOperation(string $name, int $value, bool $forceUpdate = false)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|-----------|--------------------------------------------------------------------------------------------------------------------------------|
+| **$name**        | `string` | The collection name to seed the identity value for. Can end with or without a pipe (e.g. "companies" or "companies\|"). |
+| **$value** | `int` | The number to set as the latest identity value. |
+| **$forceUpdate** | `bool`    | `true` - force a new value that is lower than the latest. `false` - only a higher value can be set. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-python.mdx
new file mode 100644
index 0000000000..dd1a357402
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/_seed-identity-python.mdx
@@ -0,0 +1,106 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `SeedIdentityForOperation` to set the latest identity value for the specified collection.
+
+* The next document that will be created using an identity for the collection will receive the next consecutive integer value.
+
+* Identity values can also be managed from the Studio [identities view](../../../../studio/database/documents/identities-view.mdx).
+
+* In this page:
+ * [Set a higher identity value](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#set-a-higher-identity-value)
+ * [Force a lower identity value](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#force-a-lower-identity-value)
+ * [Syntax](../../../../client-api/operations/maintenance/identities/seed-identity.mdx#syntax)
+
+
+## Set a higher identity value
+
+You can replace the latest identity value on the server with a new, **higher** number.
+
+
+
+
+{`# Seed a higher identity value on the server:
+# ===========================================
+
+# Define the seed identity operation. Pass:
+# * The collection name (can be with or without a pipe)
+# * The new value to set
+seed_identity_op = SeedIdentityForOperation("companies|", 23)
+
+# Execute the operation by passing it to maintenance.send
+# The latest value on the server will be incremented to "23"
+# and the next document created with an identity will be assigned "24"
+seeded_value = store.maintenance.send(seed_identity_op)
+
+# Create a document with an identity ID:
+# ======================================
+
+with store.open_session() as session:
+ session.store(Company(name="RavenDB"), "companies|")
+ session.save_changes()
+ # => Document "companies/24" will be created
+`}
+
+
+
+
+
+## Force a lower identity value
+
+* You can set the latest identity value to a number that is **lower** than the current latest value.
+
+* Before proceeding, ensure that no documents exist with an identity value higher than the new number (a minimal check is sketched below).
+
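+For illustration, here is one minimal way to perform this check - this sketch is an assumption, not a required API call. "companies/6" is the first ID the server would assign after seeding the value 5:
+
+{`# A minimal illustrative sketch:
+with store.open_session() as session:
+    # If such a document exists, delete or re-ID it before seeding the lower value
+    existing_doc = session.load("companies/6", Company)
+    if existing_doc is not None:
+        print("A document with a higher identity ID still exists")
+`}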
+
+
+{`# Force a smaller identity value on the server:
+# =============================================
+
+# Define the seed identity operation. Pass:
+# * The collection name (can be with or without a pipe)
+# * The new value to set
+# * Set 'force_update' to True
+seed_identity_op = SeedIdentityForOperation("companies|", 5, force_update=True)
+
+# Execute the operation by passing it to maintenance.send
+# The latest value on the server will be decremented to "5"
+# and the next document created with an identity will be assigned "6"
+seeded_value = store.maintenance.send(seed_identity_op)
+
+# Create a document with an identity ID:
+# ======================================
+
+with store.open_session() as session:
+ session.store(Company(name="RavenDB"), "companies|")
+ session.save_changes()
+ # => Document "companies/6" will be created
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class SeedIdentityForOperation(MaintenanceOperation[int]):
+ def __init__(self, name: str, value: int, force_update: bool = False): ...
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|--------|--------------------------------------------------------------------------------------------------------------------------------|
+| **name**         | `str`  | The collection name to seed the identity value for. Can end with or without a pipe (e.g. "companies" or "companies\|"). |
+| **value**        | `int`  | The number to set as the latest identity value. |
+| **force_update** | `bool` | `True` - force a new value that is lower than the latest. `False` - only a higher value can be set. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/get-identities.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/get-identities.mdx
new file mode 100644
index 0000000000..7ae209fe63
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/get-identities.mdx
@@ -0,0 +1,58 @@
+---
+title: "Get Identities Operation"
+hide_table_of_contents: true
+sidebar_label: Get Identities
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetIdentitiesCsharp from './_get-identities-csharp.mdx';
+import GetIdentitiesJava from './_get-identities-java.mdx';
+import GetIdentitiesPython from './_get-identities-python.mdx';
+import GetIdentitiesPhp from './_get-identities-php.mdx';
+import GetIdentitiesNodejs from './_get-identities-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/increment-next-identity.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/increment-next-identity.mdx
new file mode 100644
index 0000000000..487d374e5c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/increment-next-identity.mdx
@@ -0,0 +1,52 @@
+---
+title: "Increment Next Identity Operation"
+hide_table_of_contents: true
+sidebar_label: Increment Next Identity
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import IncrementNextIdentityCsharp from './_increment-next-identity-csharp.mdx';
+import IncrementNextIdentityPython from './_increment-next-identity-python.mdx';
+import IncrementNextIdentityPhp from './_increment-next-identity-php.mdx';
+import IncrementNextIdentityNodejs from './_increment-next-identity-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/identities/seed-identity.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/seed-identity.mdx
new file mode 100644
index 0000000000..2ee5cf8548
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/identities/seed-identity.mdx
@@ -0,0 +1,52 @@
+---
+title: "Seed Identity Operation"
+hide_table_of_contents: true
+sidebar_label: Seed Identity
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SeedIdentityCsharp from './_seed-identity-csharp.mdx';
+import SeedIdentityPython from './_seed-identity-python.mdx';
+import SeedIdentityPhp from './_seed-identity-php.mdx';
+import SeedIdentityNodejs from './_seed-identity-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_category_.json b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_category_.json
new file mode 100644
index 0000000000..e8c599ce5a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 4,
+  "label": "Indexes"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-csharp.mdx
new file mode 100644
index 0000000000..f49ea78a8e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-csharp.mdx
@@ -0,0 +1,59 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteIndexOperation` to remove an index from the database.
+
+* The index will be deleted from all the database-group nodes.
+
+* In this page:
+ * [Delete index example](../../../../client-api/operations/maintenance/indexes/delete-index.mdx#delete-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/delete-index.mdx#syntax)
+
+
+## Delete index example
+
+
+
+
+{`// Define the delete index operation, specify the index name
+var deleteIndexOp = new DeleteIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(deleteIndexOp);
+`}
+
+
+
+
+{`// Define the delete index operation, specify the index name
+var deleteIndexOp = new DeleteIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(deleteIndexOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public DeleteIndexOperation(string indexName)
+`}
+
+
+
+| Parameter | Type | Description |
+|---------------|----------|-------------------------|
+| **indexName** | `string` | Name of index to delete |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-csharp.mdx
new file mode 100644
index 0000000000..590fc502e5
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-csharp.mdx
@@ -0,0 +1,104 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteIndexErrorsOperation` to delete indexing errors.
+
+* The operation will be executed only on the server node that is defined by the current [client-configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* Deleting the errors will only **clear the index errors**.
+ An index with an 'Error state' will Not be set back to 'Normal state'.
+
+* To just get index errors see [get index errors](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx).
+
+* In this page:
+ * [Delete errors from all indexes](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#delete-errors-from-all-indexes)
+ * [Delete errors from specific indexes](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#delete-errors-from-specific-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#syntax)
+
+
+## Delete errors from all indexes
+
+
+
+
+{`// Define the delete index errors operation
+var deleteIndexErrorsOp = new DeleteIndexErrorsOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(deleteIndexErrorsOp);
+
+// All errors from ALL indexes are deleted
+`}
+
+
+
+
+{`// Define the delete index errors operation
+var deleteIndexErrorsOp = new DeleteIndexErrorsOperation();
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(deleteIndexErrorsOp);
+
+// All errors from ALL indexes are deleted
+`}
+
+
+
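+Because the operation only clears the recorded errors, it can be useful to inspect them first. Below is a brief sketch that lists the errors with `GetIndexErrorsOperation` (see [get index errors](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx)) before clearing them:
+
+{`// Inspect the current indexing errors before clearing them
+IndexErrors[] allIndexErrors = store.Maintenance.Send(new GetIndexErrorsOperation());
+
+foreach (var indexErrors in allIndexErrors)
+{
+    Console.WriteLine(indexErrors.Name + ": " + indexErrors.Errors.Length + " error(s)");
+}
+
+// Now clear the errors from all indexes
+store.Maintenance.Send(new DeleteIndexErrorsOperation());
+`}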
+
+
+
+## Delete errors from specific indexes
+
+
+
+
+{`// Define the delete index errors operation from specific indexes
+var deleteIndexErrorsOp = new DeleteIndexErrorsOperation(new[] { "Orders/Totals" });
+
+// Execute the operation by passing it to Maintenance.Send
+// An exception will be thrown if any of the specified indexes do not exist
+store.Maintenance.Send(deleteIndexErrorsOp);
+
+// Only errors from index "Orders/Totals" are deleted
+`}
+
+
+
+
+{`// Define the delete index errors operation from specific indexes
+var deleteIndexErrorsOp = new DeleteIndexErrorsOperation(new[] { "Orders/Totals" });
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// An exception will be thrown if any of the specified indexes do not exist
+await store.Maintenance.SendAsync(deleteIndexErrorsOp);
+
+// Only errors from index "Orders/Totals" are deleted
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+public DeleteIndexErrorsOperation() // Delete errors from all indexes
+public DeleteIndexErrorsOperation(string[] indexNames) // Delete errors from specific indexes
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexNames** | `string[]` | List of index names to delete errors from. An exception is thrown if any of the specified indexes does not exist. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-nodejs.mdx
new file mode 100644
index 0000000000..b959949b05
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-nodejs.mdx
@@ -0,0 +1,75 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteIndexErrorsOperation` to delete indexing errors.
+
+* The operation will be executed only on the server node that is defined by the current [client-configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* Deleting the errors will only **clear the index errors**.
+ An index with an 'Error state' will Not be set back to 'Normal state'.
+
+* To just get index errors see [get index errors](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx).
+
+* In this page:
+ * [Delete errors from all indexes](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#delete-errors-from-all-indexes)
+ * [Delete errors from specific indexes](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#delete-errors-from-specific-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#syntax)
+
+
+## Delete errors from all indexes
+
+
+
+{`// Define the delete index errors operation
+const deleteIndexErrorsOp = new DeleteIndexErrorsOperation();
+
+// Execute the operation by passing it to maintenance.send
+await store.maintenance.send(deleteIndexErrorsOp);
+
+// All errors from ALL indexes are deleted
+`}
+
+
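+Because the operation only clears the recorded errors, it can be useful to inspect them first. Below is a brief sketch that lists the errors with `GetIndexErrorsOperation` (see [get index errors](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx)); the `name`/`errors` field names on the result objects are stated here as an assumption:
+
+{`// Inspect the current indexing errors before clearing them
+const allIndexErrors = await store.maintenance.send(new GetIndexErrorsOperation());
+
+for (const indexErrors of allIndexErrors) {
+    // 'name' and 'errors' are assumed field names on the result objects
+    console.log(indexErrors.name + ": " + indexErrors.errors.length + " error(s)");
+}
+
+// Now clear the errors from all indexes
+await store.maintenance.send(new DeleteIndexErrorsOperation());
+`}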
+
+
+
+## Delete errors from specific indexes
+
+
+
+{`// Define the delete index errors operation from specific indexes
+const deleteIndexErrorsOp = new DeleteIndexErrorsOperation(["Orders/Totals"]);
+
+// Execute the operation by passing it to maintenance.send
+// An exception will be thrown if any of the specified indexes do not exist
+await store.maintenance.send(deleteIndexErrorsOp);
+
+// Only errors from index "Orders/Totals" are deleted
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+const deleteIndexErrorsOp = new DeleteIndexErrorsOperation(); // Delete errors from all indexes
+const deleteIndexErrorsOp = new DeleteIndexErrorsOperation(indexNames); // Delete errors from specific indexes
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexNames** | `string[]` | List of index names to delete errors from. An exception is thrown if any of the specified indexes does not exist. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-php.mdx
new file mode 100644
index 0000000000..dfcc7fc261
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-php.mdx
@@ -0,0 +1,75 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteIndexErrorsOperation` to delete indexing errors.
+
+* The operation will be executed only on the server node that is defined by the current [client-configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* Deleting the errors will only **clear the index errors**.
+ An index with an 'Error state' will Not be set back to 'Normal state'.
+
+* To just get index errors see [get index errors](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx).
+
+* In this page:
+ * [Delete errors from all indexes](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#delete-errors-from-all-indexes)
+ * [Delete errors from specific indexes](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#delete-errors-from-specific-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#syntax)
+
+
+## Delete errors from all indexes
+
+
+
+{`// Define the delete index errors operation
+$deleteIndexErrorsOp = new DeleteIndexErrorsOperation();
+
+// Execute the operation by passing it to maintenance()->send
+$store->maintenance()->send($deleteIndexErrorsOp);
+
+// All errors from ALL indexes are deleted
+`}
+
+
+
+
+
+## Delete errors from specific indexes
+
+
+
+{`// Define the delete index errors operation from specific indexes
+$deleteIndexErrorsOp = new DeleteIndexErrorsOperation([ "Orders/Totals" ]);
+
+// Execute the operation by passing it to maintenance()->send
+// An exception will be thrown if any of the specified indexes do not exist
+$store->maintenance()->send($deleteIndexErrorsOp);
+
+// Only errors from index "Orders/Totals" are deleted
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+DeleteIndexErrorsOperation() // Delete errors from all indexes
+DeleteIndexErrorsOperation(StringArray|array|string $indexNames) // Delete errors from specific indexes
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$indexNames** | `StringArray` `array` `string` | List of index names to delete errors from. An exception is thrown if any of the specified indexes does not exist. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-python.mdx
new file mode 100644
index 0000000000..2b29180f6a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-errors-python.mdx
@@ -0,0 +1,74 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteIndexErrorsOperation` to delete indexing errors.
+
+* The operation will be executed only on the server node that is defined by the current [client-configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* Deleting the errors will only **clear the index errors**.
+ An index with an 'Error state' will Not be set back to 'Normal state'.
+
+* To just get index errors see [get index errors](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx).
+
+* In this page:
+ * [Delete errors from all indexes](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#delete-errors-from-all-indexes)
+ * [Delete errors from specific indexes](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#delete-errors-from-specific-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx#syntax)
+
+
+## Delete errors from all indexes
+
+
+
+{`# Define the delete index errors operation
+delete_index_errors_op = DeleteIndexErrorsOperation()
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(delete_index_errors_op)
+
+# All errors from ALL indexes are deleted
+`}
+
+
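+Because the operation only clears the recorded errors, it can be useful to inspect them first. Below is a brief sketch that lists the errors with `GetIndexErrorsOperation` (see [get index errors](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx)); the `name`/`errors` attribute names on the result objects are stated here as an assumption:
+
+{`# Inspect the current indexing errors before clearing them
+all_index_errors = store.maintenance.send(GetIndexErrorsOperation())
+
+for index_errors in all_index_errors:
+    # 'name' and 'errors' are assumed attribute names on the result objects
+    print(index_errors.name + ": " + str(len(index_errors.errors)) + " error(s)")
+
+# Now clear the errors from all indexes
+store.maintenance.send(DeleteIndexErrorsOperation())
+`}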
+
+
+
+## Delete errors from specific indexes
+
+
+
+{`# Define the delete index errors operation from specific indexes
+delete_index_errors_op = DeleteIndexErrorsOperation(["Orders/Totals"])
+
+# Execute the operation by passing it to maintenance.send
+# An exception will be thrown if any of the specified indexes do not exist
+store.maintenance.send(delete_index_errors_op)
+
+# Only errors from index "Orders/Totals" are deleted
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class DeleteIndexErrorsOperation(VoidMaintenanceOperation):
+ def __init__(self, index_names: List[str] = None): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **index_names** | `List[str]` | List of index names to delete errors from. An exception is thrown if any of the specified indexes does not exist. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-java.mdx
new file mode 100644
index 0000000000..171b5da458
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-java.mdx
@@ -0,0 +1,30 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**DeleteIndexOperation** is used to remove an index from a database.
+
+## Syntax
+
+
+
+{`public DeleteIndexOperation(String indexName)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | String | name of an index to delete |
+
+## Example
+
+
+
+{`store.maintenance().send(new DeleteIndexOperation("Orders/Totals"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-nodejs.mdx
new file mode 100644
index 0000000000..7b376410f2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-nodejs.mdx
@@ -0,0 +1,47 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteIndexOperation` to remove an index from the database.
+
+* The index will be deleted from all the database-group nodes.
+
+* In this page:
+ * [Delete index example](../../../../client-api/operations/maintenance/indexes/delete-index.mdx#delete-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/delete-index.mdx#syntax)
+
+
+## Delete index example
+
+
+
+{`// Define the delete index operation, specify the index name
+const deleteIndexOp = new DeleteIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance.send
+await store.maintenance.send(deleteIndexOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const deleteIndexOp = new DeleteIndexOperation(indexName);
+`}
+
+
+
+| Parameters | Type | Description |
+|- | - | - |
+| **indexName** | `string` | Name of index to delete |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-php.mdx
new file mode 100644
index 0000000000..c78c6613fd
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-php.mdx
@@ -0,0 +1,47 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteIndexOperation` to remove an index from the database.
+
+* The index will be deleted from all the database-group nodes.
+
+* In this page:
+ * [Delete index example](../../../../client-api/operations/maintenance/indexes/delete-index.mdx#delete-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/delete-index.mdx#syntax)
+
+
+## Delete index example
+
+
+
+{`// Define the delete index operation, specify the index name
+$deleteIndexOp = new DeleteIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance()->send
+$store->maintenance()->send($deleteIndexOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`DeleteIndexOperation(?string $indexName)
+`}
+
+
+
+| Parameters | Type | Description |
+|- | - | - |
+| **$indexName** | `?string` | Name of index to delete |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-python.mdx
new file mode 100644
index 0000000000..3317174ee4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_delete-index-python.mdx
@@ -0,0 +1,48 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `DeleteIndexOperation` to remove an index from the database.
+
+* The index will be deleted from all the database-group nodes.
+
+* In this page:
+ * [Delete index example](../../../../client-api/operations/maintenance/indexes/delete-index.mdx#delete-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/delete-index.mdx#syntax)
+
+
+## Delete index example
+
+
+
+{`# Define the delete index operation, specify the index name
+delete_index_op = DeleteIndexOperation("Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(delete_index_op)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class DeleteIndexOperation(VoidMaintenanceOperation):
+ def __init__(self, index_name: str): ...
+`}
+
+
+
+| Parameters | Type | Description |
+|- | - | - |
+| **index_name** | `str` | Name of index to delete |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-csharp.mdx
new file mode 100644
index 0000000000..4175b2ec49
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-csharp.mdx
@@ -0,0 +1,169 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* You can **disable a specific index** by either of the following:
+ * From the Client API - using `DisableIndexOperation`
+ * From Studio - see [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions)
+ * Via the file system
+
+* To learn how to enable a disabled index, see [Enable index operation](../../../../client-api/operations/maintenance/indexes/enable-index.mdx).
+
+* In this page:
+
+ * [Overview](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#overview)
+ * [Which node is the index disabled on?](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#which-node-is-the-index-disabled-on)
+ * [What happens when the index is disabled?](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#what-happens-when-the-index-is-disabled)
+
+ * [Disable index from the Client API](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-from-the-client-api)
+ * [Disable index - single node](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index---single-node)
+ * [Disable index - cluster wide](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index---cluster-wide)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#syntax)
+
+ * [Disable index manually via the file system](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system)
+
+
+## Overview
+
+#### Which node is the index disabled on?
+
+* The index can be disabled either:
+ * On a single node, or
+ * Cluster wide - on all database-group nodes.
+
+* When disabling the index from the **client API** on a single node:
+ The index will be disabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When disabling an index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index will be disabled on the local node the browser is opened on, even if it is Not the preferred node.
+
+* When disabling the index [manually](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system):
+ The index will be disabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+#### What happens when the index is disabled?
+
+* No indexing will be done by a disabled index on the node where the index is disabled.
+ However, new data will be indexed by the index on other database-group nodes where it is not disabled.
+
+* You can still query the index,
+ but results may be stale when querying a node on which the index was disabled.
+
+* Disabling an index is a **persistent operation**:
+ * The index will remain disabled even after restarting the server or after [disabling/enabling](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) the database.
+ * To only pause the index and resume after a restart see: [pause index operation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx).
+
+
+
+## Disable index from the Client API
+
+#### Disable index - single node:
+
+
+
+
+{`// Define the disable index operation
+// Use this overload to disable on a single node
+var disableIndexOp = new DisableIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(disableIndexOp);
+
+// At this point, the index is disabled only on the 'preferred node'
+// New data will not be indexed on this node only
+`}
+
+
+
+
+{`// Define the disable index operation
+// Use this overload to disable on a single node
+var disableIndexOp = new DisableIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(disableIndexOp);
+
+// At this point, the index is disabled only on the 'preferred node'
+// New data will not be indexed on this node only
+`}
+
+
+
+#### Disable index - cluster wide:
+
+
+
+
+{`// Define the disable index operation
+// Pass 'true' to disable the index on all nodes in the database-group
+var disableIndexOp = new DisableIndexOperation("Orders/Totals", true);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(disableIndexOp);
+
+// At this point, the index is disabled on ALL nodes
+// New data will not be indexed
+`}
+
+
+
+
+{`// Define the disable index operation
+// Pass 'true' to disable the index on all nodes in the database-group
+var disableIndexOp = new DisableIndexOperation("Orders/Totals", true);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(disableIndexOp);
+
+// At this point, the index is disabled on ALL nodes
+// New data will not be indexed
+`}
+
+
+
+#### Syntax:
+
+
+
+{`// Available overloads:
+public DisableIndexOperation(string indexName)
+public DisableIndexOperation(string indexName, bool clusterWide)
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|----------|--------------------------------------------------------------------------------------------------------------------------|
+| **indexName** | `string` | Name of index to disable |
+| **clusterWide** | `bool`   | `true` - disable the index on all database-group nodes. `false` - disable the index only on a single node (the preferred node). |
+
+
+
+## Disable index manually via the file system
+
+* It may sometimes be useful to disable an index manually, through the file system.
+  For example, a faulty index may load before [DisableIndexOperation](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-from-the-client-api) gets a chance to disable it.
+ Manually disabling the index will ensure that the index is not loaded.
+
+* To **manually disable** an index:
+
+ * Place a file named `disable.marker` in the [index directory](../../../../server/storage/directory-structure.mdx).
+    Indexes are kept under the database directory, each index in a directory whose name is derived from the index name.
+  * The `disable.marker` file can be empty,
+    and can be created by any available method, e.g. using the File Explorer, a terminal, or code (a minimal sketch follows this list).
+
+* Attempting to use a manually disabled index will generate the following exception:
+
+ Unable to open index: '{IndexName}',
+ it has been manually disabled via the file: '{disableMarkerPath}'.
+    To re-enable, remove the disable.marker file and enable indexing.
+
+* To **enable** a manually disabled index:
+
+ * First, remove the `disable.marker` file from the index directory.
+ * Then, enable the index by any of the options described in: [How to enable an index](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#how-to-enable-an-index).
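+
+The following is a minimal sketch of creating the marker file from code; the index directory path shown is hypothetical and depends on your server's data directory layout:
+
+{`// A minimal illustrative sketch (requires System.IO) -
+// the path below is an example only; locate the actual index directory
+// under your database directory
+var indexDirectory = "C:/RavenData/Databases/Northwind/Indexes/Orders_Totals";
+
+// An empty 'disable.marker' file is enough to disable the index
+File.Create(Path.Combine(indexDirectory, "disable.marker")).Dispose();
+`}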
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-java.mdx
new file mode 100644
index 0000000000..dade7167e1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-java.mdx
@@ -0,0 +1,37 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The **DisableIndexOperation** is used to turn indexing off for a given index. Querying a `disabled` index is allowed, but it may return stale results.
+
+
+Unlike [StopIndex](../../../../client-api/operations/maintenance/indexes/stop-index.mdx) or [StopIndexing](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx), disabling an index is a persistent operation, so the index remains disabled even after a server restart.
+
+
+
+## Syntax
+
+
+
+{`public DisableIndexOperation(String indexName)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | String | name of an index to disable |
+
+## Example
+
+
+
+{`store.maintenance().send(new DisableIndexOperation("Orders/Totals"));
+// index is disabled at this point, new data won't be indexed
+// but you can still query on this index
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-nodejs.mdx
new file mode 100644
index 0000000000..32e0687c21
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-nodejs.mdx
@@ -0,0 +1,135 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* You can **disable a specific index** by either of the following:
+ * From the Client API - using `DisableIndexOperation`
+ * From Studio - see [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions)
+ * Via the file system
+
+* To learn how to enable a disabled index, see [Enable index operation](../../../../client-api/operations/maintenance/indexes/enable-index.mdx).
+
+* In this page:
+
+ * [Overview](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#overview)
+ * [Which node is the index disabled on?](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#which-node-is-the-index-disabled-on)
+ * [What happens when the index is disabled?](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#what-happens-when-the-index-is-disabled)
+
+ * [Disable index from the Client API](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-from-the-client-api)
+ * [Disable index - single node](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index---single-node)
+ * [Disable index - cluster wide](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index---cluster-wide)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#syntax)
+
+ * [Disable index manually via the file system](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system)
+
+
+## Overview
+
+#### Which node is the index disabled on?
+
+* The index can be disabled either:
+ * On a single node, or
+ * Cluster wide - on all database-group nodes.
+
+* When disabling the index from the **client API** on a single node:
+ The index will be disabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When disabling an index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index will be disabled on the local node the browser is opened on, even if it is Not the preferred node.
+
+* When disabling the index [manually](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system):
+ The index will be disabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+#### What happens when the index is disabled?
+
+* No indexing will be done by a disabled index on the node where the index is disabled.
+ However, new data will be indexed by the index on other database-group nodes where it is not disabled.
+
+* You can still query the index,
+ but results may be stale when querying a node on which the index was disabled.
+
+* Disabling an index is a **persistent operation**:
+ * The index will remain disabled even after restarting the server or after [disabling/enabling](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) the database.
+  * To only pause the index, and have it resume after a restart, see: [pause index operation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx).
+
+
+
+## Disable index from the Client API
+
+#### Disable index - single node:
+
+
+
+{`// Define the disable index operation
+// Use this overload to disable on a single node
+const disableIndexOp = new DisableIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(disableIndexOp);
+
+// At this point, the index is disabled only on the 'preferred node'
+// New data will not be indexed on this node
+`}
+
+
+#### Disable index - cluster wide:
+
+
+
+{`// Define the disable index operation
+// Pass 'true' to disable the index on all nodes in the database-group
+const disableIndexOp = new DisableIndexOperation("Orders/Totals", true);
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(disableIndexOp);
+
+// At this point, the index is disabled on ALL nodes
+// New data will not be indexed
+`}
+
+
+#### Syntax:
+
+
+
+{`const disableIndexOp = new DisableIndexOperation(indexName, clusterWide = false);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|-----------|--------------------------------------------------------------------------------------------------------------------------|
+| **indexName** | `string` | Name of index to disable |
+| **clusterWide** | `boolean` | `true` - Disable index on all database-group nodes <br/> `false` - Disable index only on a single node (the preferred node) |
+
+
+
+## Disable index manually via the file system
+
+* It may sometimes be useful to disable an index manually, through the file system.
+  For example, a faulty index may load before [DisableIndexOperation](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-from-the-client-api) gets a chance to disable it.
+ Manually disabling the index will ensure that the index is not loaded.
+
+* To **manually disable** an index:
+
+ * Place a file named `disable.marker` in the [index directory](../../../../server/storage/directory-structure.mdx).
+    Indexes are kept under the database directory, each index in a directory whose name is derived from the index name.
+  * The `disable.marker` file can be empty,
+    and can be created by any available method, e.g. using the File Explorer, a terminal, or code (see the sketch below).
+
+* Attempting to use a manually disabled index will generate the following exception:
+
+ Unable to open index: '{IndexName}',
+ it has been manually disabled via the file: '{disableMarkerPath}'.
+  To re-enable, remove the disable.marker file and enable indexing.
+
+* To **enable** a manually disabled index:
+
+ * First, remove the `disable.marker` file from the index directory.
+ * Then, enable the index by any of the options described in: [How to enable an index](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#how-to-enable-an-index).
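+
+For example, a minimal Node.js sketch that creates and later removes the `disable.marker` file. The index directory path is illustrative - adjust it to your server's data directory layout:
+
+
+
+{`const fs = require("fs");
+const path = require("path");
+
+// Illustrative path - adjust to your server's data directory layout
+const indexDir = "/ravendb/Databases/Northwind/Indexes/Orders_Totals";
+const markerPath = path.join(indexDir, "disable.marker");
+
+// Create an empty disable.marker file (disables the index on this node)
+fs.closeSync(fs.openSync(markerPath, "w"));
+
+// Later, remove the marker before re-enabling the index
+fs.unlinkSync(markerPath);
+`}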
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-php.mdx
new file mode 100644
index 0000000000..2fd7f208e2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-php.mdx
@@ -0,0 +1,135 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* You can **disable a specific index** by either of the following:
+ * From the Client API - using `DisableIndexOperation`
+ * From Studio - see [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions)
+ * Via the file system
+
+* To learn how to enable a disabled index, see [Enable index operation](../../../../client-api/operations/maintenance/indexes/enable-index.mdx).
+
+* In this page:
+
+ * [Overview](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#overview)
+ * [Which node is the index disabled on?](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#which-node-is-the-index-disabled-on)
+ * [What happens when the index is disabled?](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#what-happens-when-the-index-is-disabled)
+
+ * [Disable index from the Client API](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-from-the-client-api)
+ * [Disable index - single node](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index---single-node)
+ * [Disable index - cluster wide](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index---cluster-wide)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#syntax)
+
+ * [Disable index manually via the file system](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system)
+
+
+## Overview
+
+#### Which node is the index disabled on?
+
+* The index can be disabled either:
+ * On a single node, or
+ * Cluster wide - on all database-group nodes.
+
+* When disabling the index from the **client API** on a single node:
+ The index will be disabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When disabling an index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index will be disabled on the local node the browser is opened on, even if it is Not the preferred node.
+
+* When disabling the index [manually](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system):
+  The index will be disabled only on the node on whose file system the `disable.marker` file was placed, and Not on all the database-group nodes.
+
+#### What happens when the index is disabled?
+
+* No indexing will be done by a disabled index on the node where the index is disabled.
+ However, new data will be indexed by the index on other database-group nodes where it is not disabled.
+
+* You can still query the index,
+ but results may be stale when querying a node on which the index was disabled.
+
+* Disabling an index is a **persistent operation**:
+ * The index will remain disabled even after restarting the server or after [disabling/enabling](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) the database.
+  * To only pause the index, and have it resume after a restart, see: [pause index operation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx).
+
+
+
+## Disable index from the Client API
+
+#### Disable index - single node:
+
+
+
+{`// Define the disable index operation
+// Use this overload to disable on a single node
+$disableIndexOp = new DisableIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance()->send
+$store->maintenance()->send($disableIndexOp);
+
+// At this point, the index is disabled only on the 'preferred node'
+// New data will not be indexed on this node
+`}
+
+
+#### Disable index - cluster wide:
+
+
+
+{`// Define the disable index operation
+// Pass 'true' to disable the index on all nodes in the database-group
+$disableIndexOp = new DisableIndexOperation("Orders/Totals", true);
+
+// Execute the operation by passing it to maintenance()->send
+$store->maintenance()->send($disableIndexOp);
+
+// At this point, the index is disabled on ALL nodes
+// New data will not be indexed
+`}
+
+
+#### Syntax:
+
+
+
+{`DisableIndexOperation(?string $indexName, bool $clusterWide = false)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|--------|--------------------------------------------------------------------------------------------------------------------------|
+| **$indexName** | `?string` | Name of index to disable |
+| **$clusterWide** | `bool` | `true` - Disable index on all database-group nodes <br/> `false` - Disable index only on a single node (the preferred node) |
+
+
+
+## Disable index manually via the file system
+
+* It may sometimes be useful to disable an index manually, through the file system.
+  For example, a faulty index may load before [DisableIndexOperation](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-from-the-client-api) gets a chance to disable it.
+ Manually disabling the index will ensure that the index is not loaded.
+
+* To **manually disable** an index:
+
+ * Place a file named `disable.marker` in the [index directory](../../../../server/storage/directory-structure.mdx).
+    Indexes are kept under the database directory, each index in a directory whose name is derived from the index name.
+  * The `disable.marker` file can be empty,
+    and can be created by any available method, e.g. using the File Explorer, a terminal, or code (see the sketch below).
+
+* Attempting to use a manually disabled index will generate the following exception:
+
+ Unable to open index: '{IndexName}',
+ it has been manually disabled via the file: '{disableMarkerPath}'.
+  To re-enable, remove the disable.marker file and enable indexing.
+
+* To **enable** a manually disabled index:
+
+ * First, remove the `disable.marker` file from the index directory.
+ * Then, enable the index by any of the options described in: [How to enable an index](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#how-to-enable-an-index).
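+
+For example, a minimal PHP sketch that creates and later removes the `disable.marker` file. The index directory path is illustrative - adjust it to your server's data directory layout:
+
+
+
+{`// Illustrative path - adjust to your server's data directory layout
+$markerPath = "/ravendb/Databases/Northwind/Indexes/Orders_Totals/disable.marker";
+
+// Create an empty disable.marker file (disables the index on this node)
+touch($markerPath);
+
+// Later, remove the marker before re-enabling the index
+unlink($markerPath);
+`}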
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-python.mdx
new file mode 100644
index 0000000000..76bd9e8128
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_disable-index-python.mdx
@@ -0,0 +1,136 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* You can **disable a specific index** by either of the following:
+ * From the Client API - using `DisableIndexOperation`
+ * From Studio - see [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions)
+ * Via the file system
+
+* To learn how to enable a disabled index, see [Enable index operation](../../../../client-api/operations/maintenance/indexes/enable-index.mdx).
+
+* In this page:
+
+ * [Overview](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#overview)
+ * [Which node is the index disabled on?](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#which-node-is-the-index-disabled-on)
+ * [What happens when the index is disabled?](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#what-happens-when-the-index-is-disabled)
+
+ * [Disable index from the Client API](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-from-the-client-api)
+ * [Disable index - single node](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index---single-node)
+ * [Disable index - cluster wide](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index---cluster-wide)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#syntax)
+
+ * [Disable index manually via the file system](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system)
+
+
+## Overview
+
+#### Which node is the index disabled on?
+
+* The index can be disabled either:
+ * On a single node, or
+ * Cluster wide - on all database-group nodes.
+
+* When disabling the index from the **client API** on a single node:
+ The index will be disabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When disabling an index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index will be disabled on the local node the browser is opened on, even if it is Not the preferred node.
+
+* When disabling the index [manually](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system):
+  The index will be disabled only on the node on whose file system the `disable.marker` file was placed, and Not on all the database-group nodes.
+
+#### What happens when the index is disabled?
+
+* No indexing will be done by a disabled index on the node where the index is disabled.
+ However, new data will be indexed by the index on other database-group nodes where it is not disabled.
+
+* You can still query the index,
+ but results may be stale when querying a node on which the index was disabled.
+
+* Disabling an index is a **persistent operation**:
+ * The index will remain disabled even after restarting the server or after [disabling/enabling](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) the database.
+  * To only pause the index, and have it resume after a restart, see: [pause index operation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx).
+
+
+
+## Disable index from the Client API
+
+#### Disable index - single node:
+
+
+
+{`# Define the disable index operation
+# Pass only the index name to disable on a single node
+disable_index_op = DisableIndexOperation("Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(disable_index_op)
+
+# At this point, the index is disabled only on the 'preferred node'
+# New data will not be indexed on this node
+`}
+
+
+#### Disable index - cluster wide:
+
+
+
+{`# Define the disable index operation
+# Pass 'True' to disable the index on all nodes in the database-group
+disable_index_op = DisableIndexOperation("Orders/Totals", True)
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(disable_index_op)
+
+# At this point, the index is disabled on ALL nodes
+# New data will not be indexed
+`}
+
+
+#### Syntax:
+
+
+
+{`class DisableIndexOperation(VoidMaintenanceOperation):
+ def __init__(self, index_name: str, cluster_wide: bool = False): ...
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|--------|--------------------------------------------------------------------------------------------------------------------------|
+| **index_name** | `str` | Name of index to disable |
+| **cluster_wide** | `bool` | `True` - Disable index on all database-group nodes <br/> `False` - Disable index only on a single node (the preferred node) |
+
+
+
+## Disable index manually via the file system
+
+* It may sometimes be useful to disable an index manually, through the file system.
+  For example, a faulty index may load before [DisableIndexOperation](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-from-the-client-api) gets a chance to disable it.
+ Manually disabling the index will ensure that the index is not loaded.
+
+* To **manually disable** an index:
+
+ * Place a file named `disable.marker` in the [index directory](../../../../server/storage/directory-structure.mdx).
+    Indexes are kept under the database directory, each index in a directory whose name is derived from the index name.
+  * The `disable.marker` file can be empty,
+    and can be created by any available method, e.g. using the File Explorer, a terminal, or code (see the sketch below).
+
+* Attempting to use a manually disabled index will generate the following exception:
+
+ Unable to open index: '{IndexName}',
+ it has been manually disabled via the file: '{disableMarkerPath}'.
+  To re-enable, remove the disable.marker file and enable indexing.
+
+* To **enable** a manually disabled index:
+
+ * First, remove the `disable.marker` file from the index directory.
+ * Then, enable the index by any of the options described in: [How to enable an index](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#how-to-enable-an-index).
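+
+For example, a minimal Python sketch that creates and later removes the `disable.marker` file. The index directory path is illustrative - adjust it to your server's data directory layout:
+
+
+
+{`from pathlib import Path
+
+# Illustrative path - adjust to your server's data directory layout
+marker_path = Path("/ravendb/Databases/Northwind/Indexes/Orders_Totals") / "disable.marker"
+
+# Create an empty disable.marker file (disables the index on this node)
+marker_path.touch()
+
+# Later, remove the marker before re-enabling the index
+marker_path.unlink()
+`}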
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-csharp.mdx
new file mode 100644
index 0000000000..b1f14e24d2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-csharp.mdx
@@ -0,0 +1,134 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When an index is enabled, indexing will take place, and new data will be indexed.
+
+* To learn how to disable an index, see [disable index](../../../../client-api/operations/maintenance/indexes/disable-index.mdx).
+
+* In this page:
+ * [How to enable an index](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#how-to-enable-an-index)
+ * [Enable index from the Client API](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index-from-the-client-api)
+ * [Enable index - single node](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index---single-node)
+ * [Enable index - cluster wide](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index---cluster-wide)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#syntax)
+
+
+## How to enable an index
+
+* **From the Client API**:
+ Use `EnableIndexOperation` to enable the index from the Client API.
+ The index can be enabled:
+ * On a single node.
+ * Cluster wide, on all database-group nodes.
+
+* **From Studio**:
+ To enable the index from Studio go to the [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions).
+
+* **Reset index**:
+ [Resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) a disabled index will re-enable the index
+  locally, on the node that the reset operation was performed on (see the sketch after this list).
+
+* **Modify index definition**:
+ Modifying the index definition will also re-enable the normal operation of the index.
+
+* The above methods can also be used to enable an index that was
+ [disabled via the file system](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system),
+ after removing the `disable.marker` file.
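+
+For the reset option mentioned above, a minimal sketch using the sync API (the index name is illustrative):
+
+
+
+{`// Resetting a disabled index re-enables it on the node that executes the reset
+store.Maintenance.Send(new ResetIndexOperation("Orders/Totals"));
+`}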
+
+
+
+## Enable index from the Client API
+
+#### Enable index - single node:
+
+* With this option, the index will be enabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only.
+ The preferred node is simply the first node in the [database group topology](../../../../studio/database/settings/manage-database-group.mdx).
+
+* Note: When enabling an index from [Studio](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions),
+ the index will be enabled on the local node the browser is opened on, even if it is Not the preferred node.
+
+
+
+
+{`// Define the enable index operation
+// Use this overload to enable on a single node
+var enableIndexOp = new EnableIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(enableIndexOp);
+
+// At this point, the index is enabled on the 'preferred node'
+// New data will be indexed on this node
+`}
+
+
+
+
+{`// Define the enable index operation
+// Use this overload to enable on a single node
+var enableIndexOp = new EnableIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(enableIndexOp);
+
+// At this point, the index is enabled on the 'preferred node'
+// New data will be indexed on this node
+`}
+
+
+
+#### Enable index - cluster wide:
+
+
+
+
+{`// Define the enable index operation
+// Pass 'true' to enable the index on all nodes in the database-group
+var enableIndexOp = new EnableIndexOperation("Orders/Totals", true);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(enableIndexOp);
+
+// At this point, the index is enabled on ALL nodes
+// New data will be indexed
+`}
+
+
+
+
+{`// Define the enable index operation
+// Pass 'true' to enable the index on all nodes in the database-group
+var enableIndexOp = new EnableIndexOperation("Orders/Totals", true);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(enableIndexOp);
+
+// At this point, the index is enabled on ALL nodes
+// New data will be indexed
+`}
+
+
+
+#### Syntax:
+
+
+
+{`// Available overloads:
+public EnableIndexOperation(string indexName)
+public EnableIndexOperation(string indexName, bool clusterWide)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Name of index to enable |
+| **clusterWide** | `bool` | `true` - Enable index on all database-group nodes <br/> `false` - Enable index only on a single node (the preferred node) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-java.mdx
new file mode 100644
index 0000000000..0d8dad2ba3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-java.mdx
@@ -0,0 +1,31 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**EnableIndexOperation** is used to turn on indexing for a given index.
+
+
+## Syntax
+
+
+
+{`public EnableIndexOperation(String indexName)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | String | name of the index for which to enable indexing |
+
+## Example
+
+
+
+{`store.maintenance().send(new EnableIndexOperation("Orders/Totals"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-nodejs.mdx
new file mode 100644
index 0000000000..2606b0e7a1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-nodejs.mdx
@@ -0,0 +1,100 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When an index is enabled, indexing will take place, and new data will be indexed.
+
+* To learn how to disable an index, see [disable index](../../../../client-api/operations/maintenance/indexes/disable-index.mdx).
+
+* In this page:
+ * [How to enable an index](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#how-to-enable-an-index)
+ * [Enable index from the Client API](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index-from-the-client-api)
+ * [Enable index - single node](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index---single-node)
+ * [Enable index - cluster wide](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index---cluster-wide)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#syntax)
+
+
+## How to enable an index
+
+* **From the Client API**:
+ Use `EnableIndexOperation` to enable the index from the Client API.
+ The index can be enabled:
+ * On a single node.
+ * Cluster wide, on all database-group nodes.
+
+* **From Studio**:
+ To enable the index from Studio go to the [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions).
+
+* **Reset index**:
+ [Resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) a disabled index will re-enable the index
+  locally, on the node that the reset operation was performed on (see the sketch after this list).
+
+* **Modify index definition**:
+ Modifying the index definition will also re-enable the normal operation of the index.
+
+* The above methods can also be used to enable an index that was
+ [disabled via the file system](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system),
+ after removing the `disable.marker` file.
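+
+For the reset option mentioned above, a minimal sketch (the index name is illustrative):
+
+
+
+{`// Resetting a disabled index re-enables it on the node that executes the reset
+await documentStore.maintenance.send(new ResetIndexOperation("Orders/Totals"));
+`}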
+
+
+
+## Enable index from the Client API
+
+#### Enable index - single node:
+
+* With this option, the index will be enabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only.
+ The preferred node is simply the first node in the [database group topology](../../../../studio/database/settings/manage-database-group.mdx).
+
+* Note: When enabling an index from [Studio](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions),
+ the index will be enabled on the local node the browser is opened on, even if it is Not the preferred node.
+
+
+
+{`// Define the enable index operation
+// Use this overload to enable on a single node
+const enableIndexOp = new EnableIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(enableIndexOp);
+
+// At this point, the index is enabled on the 'preferred node'
+// New data will be indexed on this node
+`}
+
+
+#### Enable index - cluster wide:
+
+
+
+{`// Define the enable index operation
+// Pass 'true' to enable the index on all nodes in the database-group
+const enableIndexOp = new EnableIndexOperation("Orders/Totals", true);
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(enableIndexOp);
+
+// At this point, the index is enabled on ALL nodes
+// New data will be indexed
+`}
+
+
+#### Syntax:
+
+
+
+{`const enableIndexOp = new EnableIndexOperation(indexName, clusterWide = false);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Name of index to enable |
+| **clusterWide** | `boolean` | `true` - Enable index on all database-group nodes <br/> `false` - Enable index only on a single node (the preferred node) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-php.mdx
new file mode 100644
index 0000000000..9f584de502
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-php.mdx
@@ -0,0 +1,101 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When an index is enabled, indexing will take place, and new data will be indexed.
+
+* To learn how to disable an index, see [disable index](../../../../client-api/operations/maintenance/indexes/disable-index.mdx).
+
+* In this page:
+ * [How to enable an index](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#how-to-enable-an-index)
+ * [Enable index from the Client API](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index-from-the-client-api)
+ * [Enable index - single node](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index---single-node)
+ * [Enable index - cluster wide](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index---cluster-wide)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#syntax)
+
+
+## How to enable an index
+
+* **From the Client API**:
+ Use `EnableIndexOperation` to enable the index from the Client API.
+ The index can be enabled:
+ * On a single node.
+ * Cluster wide, on all database-group nodes.
+
+* **From Studio**:
+ To enable the index from Studio go to the [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions).
+
+* **Reset index**:
+ [Resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) a disabled index will re-enable the index
+  locally, on the node that the reset operation was performed on (see the sketch after this list).
+
+* **Modify index definition**:
+ Modifying the index definition will also re-enable the normal operation of the index.
+
+* The above methods can also be used to enable an index that was
+ [disabled via the file system](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system),
+ after removing the `disable.marker` file.
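+
+For the reset option mentioned above, a minimal sketch (the index name is illustrative, and it assumes the PHP client exposes `ResetIndexOperation` like the other clients):
+
+
+
+{`// Resetting a disabled index re-enables it on the node that executes the reset
+$store->maintenance()->send(new ResetIndexOperation("Orders/Totals"));
+`}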
+
+
+
+## Enable index from the Client API
+
+#### Enable index - single node:
+
+* With this option, the index will be enabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only.
+ The preferred node is simply the first node in the [database group topology](../../../../studio/database/settings/manage-database-group.mdx).
+
+* Note: When enabling an index from [Studio](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions),
+ the index will be enabled on the local node the browser is opened on, even if it is Not the preferred node.
+
+
+
+{`// Define the enable index operation
+// Use this overload to enable on a single node
+$enableIndexOp = new EnableIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance()->send
+$store->maintenance()->send($enableIndexOp);
+
+// At this point, the index is enabled on the 'preferred node'
+// New data will be indexed on this node
+`}
+
+
+#### Enable index - cluster wide:
+
+
+
+{`// Define the enable index operation
+// Pass 'true' to enable the index on all nodes in the database-group
+$enableIndexOp = new EnableIndexOperation("Orders/Totals", true);
+
+// Execute the operation by passing it to maintenance()->send
+$store->maintenance()->send($enableIndexOp);
+
+// At this point, the index is enabled on ALL nodes
+// New data will be indexed
+`}
+
+
+#### Syntax:
+
+
+
+{`// Available overloads:
+EnableIndexOperation(?string $indexName, bool $clusterWide = false)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$indexName** | `?string` | Name of index to enable |
+| **$clusterWide** | `bool` | `true` - Enable index on all database-group nodes <br/> `false` - Enable index only on a single node (the preferred node) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-python.mdx
new file mode 100644
index 0000000000..f82f14199e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_enable-index-python.mdx
@@ -0,0 +1,101 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When an index is enabled, indexing will take place, and new data will be indexed.
+
+* To learn how to disable an index, see [disable index](../../../../client-api/operations/maintenance/indexes/disable-index.mdx).
+
+* In this page:
+ * [How to enable an index](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#how-to-enable-an-index)
+ * [Enable index from the Client API](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index-from-the-client-api)
+ * [Enable index - single node](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index---single-node)
+ * [Enable index - cluster wide](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#enable-index---cluster-wide)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/enable-index.mdx#syntax)
+
+
+## How to enable an index
+
+* **From the Client API**:
+ Use `EnableIndexOperation` to enable the index from the Client API.
+ The index can be enabled:
+ * On a single node.
+ * Cluster wide, on all database-group nodes.
+
+* **From Studio**:
+ To enable the index from Studio go to the [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions).
+
+* **Reset index**:
+ [Resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) a disabled index will re-enable the index
+  locally, on the node that the reset operation was performed on (see the sketch after this list).
+
+* **Modify index definition**:
+ Modifying the index definition will also re-enable the normal operation of the index.
+
+* The above methods can also be used to enable an index that was
+ [disabled via the file system](../../../../client-api/operations/maintenance/indexes/disable-index.mdx#disable-index-manually-via-the-file-system),
+ after removing the `disable.marker` file.
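+
+For the reset option mentioned above, a minimal sketch (the index name is illustrative):
+
+
+
+{`# Resetting a disabled index re-enables it on the node that executes the reset
+store.maintenance.send(ResetIndexOperation("Orders/Totals"))
+`}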
+
+
+
+## Enable index from the Client API
+
+#### Enable index - single node:
+
+* With this option, the index will be enabled on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only.
+ The preferred node is simply the first node in the [database group topology](../../../../studio/database/settings/manage-database-group.mdx).
+
+* Note: When enabling an index from [Studio](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions),
+ the index will be enabled on the local node the browser is opened on, even if it is Not the preferred node.
+
+
+
+{`# Define the enable index operation
+# Pass only the index name to enable on a single node
+enable_index_op = EnableIndexOperation("Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(enable_index_op)
+
+# At this point, the index is enabled only on the 'preferred node'
+# New data will be indexed on this node
+`}
+
+
+#### Enable index - cluster wide:
+
+
+
+{`# Define the enable index operation
+# Pass 'True' to enable the index on all nodes in the database-group
+enable_index_op = EnableIndexOperation("Orders/Totals", True)
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(enable_index_op)
+
+# At this point, the index is enabled on ALL nodes
+# New data will be indexed
+`}
+
+
+#### Syntax:
+
+
+
+{`class EnableIndexOperation(VoidMaintenanceOperation):
+ def __init__(self, index_name: str, cluster_wide: bool = False): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **index_name** | `str` | Name of index to enable |
+| **cluster_wide** | `bool` | `True` - Enable index on all database-group nodes <br/> `False` - Enable index only on a single node (the preferred node) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-csharp.mdx
new file mode 100644
index 0000000000..b9156e0f96
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-csharp.mdx
@@ -0,0 +1,80 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexOperation` to retrieve an index definition from the database.
+
+* The operation will execute on the node defined by the [client configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, the index definition returned is taken from the database record,
+  which is common to all the database-group nodes,
+  i.e., an index state change made only on a local node is not reflected.
+
+* To get the index state on the local node, use `GetIndexStatisticsOperation` (see the sketch below).
+
+* In this page:
+ * [Get Index example](../../../../client-api/operations/maintenance/indexes/get-index.mdx#get-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index.mdx#syntax)
+
+
+## Get Index example
+
+
+
+
+{`// Define the get index operation, pass the index name
+var getIndexOp = new GetIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+IndexDefinition index = store.Maintenance.Send(getIndexOp);
+
+// Access the index definition
+var state = index.State;
+var lockMode = index.LockMode;
+var deploymentMode = index.DeploymentMode;
+// etc.
+`}
+
+
+
+
+{`// Define the get index operation, pass the index name
+var getIndexOp = new GetIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.SendAsync
+IndexDefinition index = await store.Maintenance.SendAsync(getIndexOp);
+
+// Access the index definition
+var state = index.State;
+var lockMode = index.LockMode;
+var deploymentMode = index.DeploymentMode;
+// etc.
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetIndexOperation(string indexName)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Name of index to get |
+
+| Return value of `store.Maintenance.Send(getIndexOp)` | Description |
+|- | - |
+| `IndexDefinition` | An instance of class [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) |
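+
+As noted above, to inspect the index state on the specific node that serves the request, here is a minimal sketch using `GetIndexStatisticsOperation` (sync API; the index name is illustrative):
+
+
+
+{`// Get the index statistics from the node that serves the request
+var stats = store.Maintenance.Send(new GetIndexStatisticsOperation("Orders/Totals"));
+
+// The returned state reflects this node only, e.g. Normal / Disabled / Idle / Error
+var localState = stats.State;
+`}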
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-csharp.mdx
new file mode 100644
index 0000000000..8e9ec1804f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-csharp.mdx
@@ -0,0 +1,138 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexErrorsOperation` to get errors encountered during indexing.
+
+* The index errors will be retrieved only from the server node defined by the current [client-configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* To learn about clearing index errors, see [delete index errors](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx).
+
+* In this page:
+ * [Get errors for all indexes](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#get-errors-for-all-indexes)
+ * [Get errors for specific indexes](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#get-errors-for-specific-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#syntax)
+
+
+## Get errors for all indexes
+
+
+
+
+{`// Define the get index errors operation
+var getIndexErrorsOp = new GetIndexErrorsOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+IndexErrors[] indexErrors = store.Maintenance.Send(getIndexErrorsOp);
+
+// indexErrors will contain errors for ALL indexes
+`}
+
+
+
+
+{`// Define the get index errors operation
+var getIndexErrorsOp = new GetIndexErrorsOperation();
+
+// Execute the operation by passing it to Maintenance.SendAsync
+IndexErrors[] indexErrors = await store.Maintenance.SendAsync(getIndexErrorsOp);
+
+// indexErrors will contain errors for ALL indexes
+`}
+
+
+
+
+
+
+## Get errors for specific indexes
+
+
+
+
+{`// Define the get index errors operation for specific indexes
+var getIndexErrorsOp = new GetIndexErrorsOperation(new[] { "Orders/Totals" });
+
+// Execute the operation by passing it to Maintenance.Send
+// An exception will be thrown if any of the specified indexes do not exist
+IndexErrors[] indexErrors = store.Maintenance.Send(getIndexErrorsOp);
+
+// indexErrors will contain errors only for index "Orders/Totals"
+`}
+
+
+
+
+{`// Define the get index errors operation for specific indexes
+var getIndexErrorsOp = new GetIndexErrorsOperation(new[] { "Orders/Totals" });
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// An exception will be thrown if any of the specified indexes do not exist
+IndexErrors[] indexErrors = await store.Maintenance.SendAsync(getIndexErrorsOp);
+
+// indexErrors will contain errors only for index "Orders/Totals"
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+public GetIndexErrorsOperation() // Get errors for all indexes
+public GetIndexErrorsOperation(string[] indexNames) // Get errors for specific indexes
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexNames** | `string[]` | List of index names to get errors for |
+
+| Return value of `store.Maintenance.Send(getIndexErrorsOp)`| Description |
+| - | - |
+| `IndexErrors[]` | List of `IndexErrors` classes - see definition below. An exception is thrown if any of the specified indexes doesn't exist. |
+
+
+
+{`public class IndexErrors
+\{
+ public string Name \{ get; set; \} // Index name
+ public IndexingError[] Errors \{ get; set; \} // List of errors for this index
+\}
+`}
+
+
+
+
+
+{`public class IndexingError
+\{
+ // The error message
+ public string Error \{ get; set; \}
+
+ // Time of error
+ public DateTime Timestamp \{ get; set; \}
+
+ // If Action is 'Map' - field will contain the document ID
+ // If Action is 'Reduce' - field will contain the Reduce key value
+ // For all other Actions - field will be null
+ public string Document \{ get; set; \}
+
+ // Area where error has occurred, e.g. Map/Reduce/Analyzer/Memory/etc.
+ public string Action \{ get; set; \}
+\}
+`}
+
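+A minimal sketch that iterates over the returned errors (the output format is illustrative):
+
+
+
+{`foreach (var indexWithErrors in indexErrors)
+\{
+    Console.WriteLine("Index: " + indexWithErrors.Name);
+
+    foreach (var error in indexWithErrors.Errors)
+    \{
+        Console.WriteLine(error.Timestamp + " " + error.Action + ": " + error.Error);
+    \}
+\}
+`}
+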
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-java.mdx
new file mode 100644
index 0000000000..d1a2ec291d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-java.mdx
@@ -0,0 +1,122 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**GetIndexErrorsOperation** is used to return errors encountered during document indexing.
+
+## Syntax
+
+
+
+{`public GetIndexErrorsOperation()
+
+public GetIndexErrorsOperation(String[] indexNames)
+`}
+
+
+
+
+
+{`public class IndexErrors \{
+ private String name;
+ private IndexingError[] errors;
+
+ public IndexErrors() \{
+ errors = new IndexingError[0];
+ \}
+
+ public String getName() \{
+ return name;
+ \}
+
+ public void setName(String name) \{
+ this.name = name;
+ \}
+
+ public IndexingError[] getErrors() \{
+ return errors;
+ \}
+
+ public void setErrors(IndexingError[] errors) \{
+ this.errors = errors;
+ \}
+\}
+`}
+
+
+
+
+
+{`public class IndexingError \{
+
+ private String error;
+ private Date timestamp;
+ private String document;
+ private String action;
+
+ public String getError() \{
+ return error;
+ \}
+
+ public void setError(String error) \{
+ this.error = error;
+ \}
+
+ public Date getTimestamp() \{
+ return timestamp;
+ \}
+
+ public void setTimestamp(Date timestamp) \{
+ this.timestamp = timestamp;
+ \}
+
+ public String getDocument() \{
+ return document;
+ \}
+
+ public void setDocument(String document) \{
+ this.document = document;
+ \}
+
+ public String getAction() \{
+ return action;
+ \}
+
+ public void setAction(String action) \{
+ this.action = action;
+ \}
+\}
+`}
+
+
+
+| Return Value | | |
+| ------------- | ----- | ---- |
+| **Name** | String | Index name |
+| **Errors** | IndexingError\[\] | List of indexing errors |
+
+## Example I
+
+
+
+{`// gets errors for all indexes
+IndexErrors[] indexErrors
+ = store.maintenance().send(new GetIndexErrorsOperation());
+`}
+
+
+
+## Example II
+
+
+
+{`// gets errors only for 'Orders/Totals' index
+IndexErrors[] indexErrors
+ = store.maintenance()
+ .send(new GetIndexErrorsOperation(new String[]\{"Orders/Totals"\}));
+`}
+
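+A minimal sketch that iterates over the returned errors (the output format is illustrative):
+
+
+
+{`for (IndexErrors indexWithErrors : indexErrors) \{
+    System.out.println("Index: " + indexWithErrors.getName());
+
+    for (IndexingError error : indexWithErrors.getErrors()) \{
+        System.out.println(error.getTimestamp() + " " + error.getAction() + ": " + error.getError());
+    \}
+\}
+`}
+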
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-nodejs.mdx
new file mode 100644
index 0000000000..72d3c66457
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-nodejs.mdx
@@ -0,0 +1,110 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexErrorsOperation` to get errors encountered during indexing.
+
+* The index errors will be retrieved only from the server node defined by the current [client-configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* To learn about clearing index errors, see [delete index errors](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx).
+
+* In this page:
+ * [Get errors for all indexes](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#get-errors-for-all-indexes)
+ * [Get errors for specific indexes](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#get-errors-for-specific-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#syntax)
+
+
+## Get errors for all indexes
+
+
+
+{`// Define the get index errors operation
+const getIndexErrorsOp = new GetIndexErrorsOperation();
+
+// Execute the operation by passing it to maintenance.send
+const indexErrors = await store.maintenance.send(getIndexErrorsOp);
+
+// indexErrors will contain errors for ALL indexes
+`}
+
+
+
+
+
+## Get errors for specific indexes
+
+
+
+{`// Define the get index errors operation for specific indexes
+const getIndexErrorsOp = new GetIndexErrorsOperation(["Orders/Totals"]);
+
+// Execute the operation by passing it to maintenance.send
+// An exception will be thrown if any of the specified indexes do not exist
+const indexErrors = await store.maintenance.send(getIndexErrorsOp);
+
+// indexErrors will contain errors only for index "Orders/Totals"
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+const getIndexErrorsOp = new GetIndexErrorsOperation(); // Get errors for all indexes
+const getIndexErrorsOp = new GetIndexErrorsOperation(indexNames); // Get errors for specific indexes
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexNames** | `string[]` | List of index names to get errors for |
+
+| Return value of `store.maintenance.send(getIndexErrorsOp)`| Description |
+| - | - |
+| `object[]` | List of 'index errors' objects - see definition below. An exception is thrown if any of the specified indexes doesn't exist. |
+
+
+
+
+{`// An 'index errors' object:
+\{
+ name, // Index name
+ errors // List of 'error objects' for this index
+\}
+`}
+
+
+
+
+{`// An 'error object':
+\{
+ // The error message
+ error,
+
+ // Time of error
+ timestamp,
+
+ // If Action is 'Map' - field will contain the document ID
+ // If Action is 'Reduce' - field will contain the Reduce key value
+ // For all other Actions - field will be null
+ document,
+
+ // Area where error has occurred, e.g. Map/Reduce/Analyzer/Memory/etc.
+ action
+\}
+`}
+
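+A minimal sketch that iterates over the returned errors (the output format is illustrative):
+
+
+
+{`for (const indexWithErrors of indexErrors) \{
+    console.log("Index: " + indexWithErrors.name);
+
+    for (const err of indexWithErrors.errors) \{
+        console.log(err.timestamp + " " + err.action + ": " + err.error);
+    \}
+\}
+`}
+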
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-php.mdx
new file mode 100644
index 0000000000..29fbe0e3e1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-php.mdx
@@ -0,0 +1,117 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexErrorsOperation` to get errors encountered during indexing.
+
+* The index errors will be retrieved only from the server node defined by the current [client-configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* To learn about clearing index errors, see [delete index errors](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx).
+
+* In this page:
+ * [Get errors for all indexes](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#get-errors-for-all-indexes)
+ * [Get errors for specific indexes](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#get-errors-for-specific-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#syntax)
+
+
+## Get errors for all indexes
+
+
+
+{`// Define the get index errors operation
+$getIndexErrorsOp = new GetIndexErrorsOperation();
+
+// Execute the operation by passing it to maintenance()->send
+/** @var IndexErrorsArray $indexErrors */
+$indexErrors = $store->maintenance()->send($getIndexErrorsOp);
+
+// indexErrors will contain errors for ALL indexes
+`}
+
+
+
+
+
+## Get errors for specific indexes
+
+
+
+{`// Define the get index errors operation for specific indexes
+$getIndexErrorsOp = new GetIndexErrorsOperation([ "Orders/Totals" ]);
+
+// Execute the operation by passing it to maintenance()->send
+// An exception will be thrown if any of the specified indexes do not exist
+/** @var IndexErrorsArray $indexErrors */
+$indexErrors = $store->maintenance()->send($getIndexErrorsOp);
+
+// indexErrors will contain errors only for index "Orders/Totals"
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+GetIndexErrorsOperation() // Get errors for all indexes
+GetIndexErrorsOperation(array $indexNames) // Get errors for specific indexes
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$indexNames** | `array` | List of index names to get errors for |
+
+| Return value of `$store->maintenance()->send($getIndexErrorsOp)` | Description |
+| - | - |
+| `?IndexErrorsArray` | List of `IndexErrors` objects - see definition below. An exception is thrown if any of the specified indexes doesn't exist. |
+
+
+
+
+
+{`class IndexErrors
+\{
+ private ?string $name = null; // Index name
+ private ?IndexingErrorArray $errors = null; // List of errors for this index
+
+ // ... getters and setters
+\}
+`}
+
+
+
+
+
+{`class IndexingError
+\{
+ // The error message
+ private ?string $error = null;
+
+ // Time of error
+ private ?DateTimeInterface $timestamp = null;
+
+ // If Action is 'Map' - field will contain the document ID
+ // If Action is 'Reduce' - field will contain the Reduce key value
+ // For all other Actions - field will be null
+ private ?string $document = null;
+
+ // Area where error has occurred, e.g. Map/Reduce/Analyzer/Memory/etc.
+ private ?string $action = null;
+
+ // ... getters and setters
+\}
+`}
+
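+A minimal sketch that iterates over the returned errors (the output format is illustrative; the getter names assume the accessors mentioned above):
+
+
+
+{`foreach ($indexErrors as $indexWithErrors) \{
+    echo "Index: " . $indexWithErrors->getName() . PHP_EOL;
+
+    foreach ($indexWithErrors->getErrors() as $error) \{
+        echo $error->getAction() . ": " . $error->getError() . PHP_EOL;
+    \}
+\}
+`}
+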
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-python.mdx
new file mode 100644
index 0000000000..8dce945c5f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-errors-python.mdx
@@ -0,0 +1,115 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexErrorsOperation` to get errors encountered during indexing.
+
+* The index errors will be retrieved only from the server node defined by the current [client-configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* To learn about clearing index errors, see [delete index errors](../../../../client-api/operations/maintenance/indexes/delete-index-errors.mdx).
+
+* In this page:
+ * [Get errors for all indexes](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#get-errors-for-all-indexes)
+ * [Get errors for specific indexes](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#get-errors-for-specific-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index-errors.mdx#syntax)
+
+
+## Get errors for all indexes
+
+
+
+{`# Define the get index errors operation
+get_index_errors_op = GetIndexErrorsOperation()
+
+# Execute the operation by passing it to maintenance.send
+index_errors = store.maintenance.send(get_index_errors_op)
+
+# index_errors will contain errors for ALL indexes
+`}
+
+
+
+
+
+## Get errors for specific indexes
+
+
+
+{`# Define the get index errors operation for specific indexes
+get_index_errors_op = GetIndexErrorsOperation("Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+# An exception will be thrown if any of the specified indexes do not exist
+index_errors = store.maintenance.send(get_index_errors_op)
+
+# index_errors will contain errors only for index "Orders/Totals"
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class GetIndexErrorsOperation(MaintenanceOperation[List[IndexErrors]]):
+ def __init__(self, *index_names: str): # If no index_names provided, get errors for all indexes
+ ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **\*index_names** | `str` | List of index names to get errors for |
+
+| Return value of `store.maintenance.send(GetIndexErrorsOperation)` | Description |
+| - | - |
+| `List[IndexErrors]` | List of `IndexErrors` classes - see definition below. An exception is thrown if any of the specified indexes doesn't exist. |
+
+
+
+
+
+{`class IndexErrors:
+ def __init__(self, name: Optional[str] = None, errors: Optional[List[IndexingError]] = None):
+ self.name = name # Index name
+ self.errors = errors # List of errors for this index
+`}
+
+
+
+
+
+{`class IndexingError:
+ def __init__(
+ self,
+ error: Optional[str] = None,
+ timestamp: Optional[datetime.datetime] = None,
+ document: Optional[str] = None,
+ action: Optional[str] = None,
+ ):
+ # Error message
+ self.error = error
+
+ # Time of error
+ self.timestamp = timestamp
+
+ # If action is 'Map' - field will contain the document ID
+ # If action is 'Reduce' - field will contain the Reduce key value
+ # For all other actions - field will be None
+ self.document = document
+
+ # Area where error has occurred, e.g. Map/Reduce/Analyzer/Memory/etc.
+ self.action = action
+`}
+
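+A minimal sketch that iterates over the returned errors (the output format is illustrative):
+
+
+
+{`for index_with_errors in index_errors:
+    print("Index: " + index_with_errors.name)
+
+    for error in index_with_errors.errors:
+        print(str(error.timestamp) + " " + error.action + ": " + error.error)
+`}
+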
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-java.mdx
new file mode 100644
index 0000000000..3522b23556
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-java.mdx
@@ -0,0 +1,36 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**GetIndexOperation** is used to retrieve an index definition from a database.
+
+### Syntax
+
+
+
+{`public GetIndexOperation(String indexName)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | String | name of an index |
+
+| Return Value | |
+| ------------- | ----- |
+| `IndexDefinition` | Instance of `IndexDefinition` representing the index. |
+
+### Example
+
+
+
+{`IndexDefinition index
+ = store.maintenance()
+ .send(new GetIndexOperation("Orders/Totals"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-csharp.mdx
new file mode 100644
index 0000000000..898c14d18a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-csharp.mdx
@@ -0,0 +1,68 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexNamesOperation` to retrieve multiple **index names** from the database.
+
+* In this page:
+ * [Get index names example](../../../../client-api/operations/maintenance/indexes/get-index-names.mdx#get-index-names-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index-names.mdx#syntax)
+
+
+## Get index names example
+
+
+
+
+{`// Define the get index names operation
+// Pass number of indexes to skip & number of indexes to retrieve
+var getIndexNamesOp = new GetIndexNamesOperation(0, 10);
+
+// Execute the operation by passing it to Maintenance.Send
+string[] indexNames = store.Maintenance.Send(getIndexNamesOp);
+
+// indexNames will contain the names of the first 10 indexes, alphabetically ordered
+`}
+
+
+
+
+{`// Define the get index names operation
+// Pass number of indexes to skip & number of indexes to retrieve
+var getIndexNamesOp = new GetIndexNamesOperation(0, 10);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+string[] indexNames = await store.Maintenance.SendAsync(getIndexNamesOp);
+
+// indexNames will contain the names of the first 10 indexes, alphabetically ordered
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetIndexNamesOperation(int start, int pageSize)
+`}
+
+
+
+| Parameters | Type | Description |
+| - |- | - |
+| **start** | `int` | Number of index names to skip |
+| **pageSize** | `int` | Number of index names to retrieve |
+
+| Return Value of `store.Maintenance.Send(getIndexNamesOp)` | Description |
+| - | - |
+| `string[]` | A list of index names. Alphabetically ordered. |
+
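+The `start` and `pageSize` parameters can be combined to page through all index names.
+The following is a minimal sketch (the page size of 10 is an arbitrary choice):
+
+
+
+{`// Page through ALL index names, 10 at a time
+var allIndexNames = new List<string>();
+var start = 0;
+const int pageSize = 10;
+
+while (true)
+{
+    // Fetch the next page of index names
+    string[] page = store.Maintenance.Send(new GetIndexNamesOperation(start, pageSize));
+    if (page.Length == 0)
+        break;
+
+    allIndexNames.AddRange(page);
+    start += page.Length;
+}
+`}
+
+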
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-java.mdx
new file mode 100644
index 0000000000..bfb8903171
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-java.mdx
@@ -0,0 +1,38 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**GetIndexNamesOperation** is used to retrieve multiple index names from a database.
+
+### Syntax
+
+
+
+{`public GetIndexNamesOperation(int start, int pageSize)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **start** | int | Number of index names that should be skipped |
+| **pageSize** | int | Maximum number of index names that will be retrieved |
+
+| Return Value | |
+| ------------- | ----- |
+| String[] | This method returns an array of index **names** as a result. |
+
+### Example
+
+
+
+{`String[] indexNames
+ = store.maintenance()
+ .send(new GetIndexNamesOperation(0, 10));
+`}
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-nodejs.mdx
new file mode 100644
index 0000000000..c298d48c36
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-nodejs.mdx
@@ -0,0 +1,53 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexNamesOperation` to retrieve multiple **index names** from the database.
+
+* In this page:
+ * [Get index names example](../../../../client-api/operations/maintenance/indexes/get-index-names.mdx#get-index-names-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index-names.mdx#syntax)
+
+
+## Get index names example
+
+
+
+{`// Define the get index names operation
+// Pass number of indexes to skip & number of indexes to retrieve
+const getIndexNamesOp = new GetIndexNamesOperation(0, 10);
+
+// Execute the operation by passing it to maintenance.send
+const indexNames = await store.maintenance.send(getIndexNamesOp);
+
+// indexNames will contain the names of the first 10 indexes, alphabetically ordered
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const getIndexNamesOp = new GetIndexNamesOperation(start, pageSize);
+`}
+
+
+
+| Parameters | Type | Description |
+| - |- | - |
+| **start** | `number` | Number of index names to skip |
+| **pageSize** | `number` | Number of index names to retrieve |
+
+| Return Value of `store.maintenance.send(getIndexNamesOp)` | Description |
+| - | - |
+| `string[]` | A list of index names. Alphabetically ordered. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-php.mdx
new file mode 100644
index 0000000000..f5c1de5b89
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-php.mdx
@@ -0,0 +1,33 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexNamesOperation` to retrieve multiple **index names** from the database.
+
+* In this page:
+ * [Get index names example](../../../../client-api/operations/maintenance/indexes/get-index-names.mdx#get-index-names-example)
+
+
+## Get index names example
+
+
+
+{`// Define the get index names operation
+// Pass number of indexes to skip & number of indexes to retrieve
+$getIndexNamesOp = new GetIndexNamesOperation(0, 10);
+
+// Execute the operation by passing it to Maintenance.Send
+/** @var StringArrayResult $indexNames */
+$indexNames = $store->maintenance()->send($getIndexNamesOp);
+
+// indexNames will contain the names of the first 10 indexes, alphabetically ordered
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-python.mdx
new file mode 100644
index 0000000000..6307640a28
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-names-python.mdx
@@ -0,0 +1,54 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexNamesOperation` to retrieve multiple **index names** from the database.
+
+* In this page:
+ * [Get index names example](../../../../client-api/operations/maintenance/indexes/get-index-names.mdx#get-index-names-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index-names.mdx#syntax)
+
+
+## Get index names example
+
+
+
+{`# Define the get index names operation
+# Pass number of indexes to skip & number of indexes to retrieve
+get_index_names_op = GetIndexNamesOperation(0, 10)
+
+# Execute the operation by passing it to maintenance.send
+index_names = store.maintenance.send(get_index_names_op)
+
+# index_names will contain the names of the first 10 indexes, alphabetically ordered
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class GetIndexNamesOperation(MaintenanceOperation):
+ def __init__(self, start: int, page_size: int): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - |- | - |
+| **start** | `int` | Number of index names to skip |
+| **page_size** | `int` | Number of index names to retrieve |
+
+| Return Value of `store.maintenance.send(GetIndexNamesOperation)` | Description |
+| - | - |
+| `List[str]` | A list of index names. Alphabetically ordered. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-nodejs.mdx
new file mode 100644
index 0000000000..7f5b7d5b2f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-nodejs.mdx
@@ -0,0 +1,62 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexOperation` to retrieve the **index definition** from the database.
+
+* The operation will execute on the node defined by the [client configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, the index definition returned is taken from the database record,
+ which is common to all the database-group nodes.
+ i.e., an index state change done only on a local node is not reflected.
+
+* To get the index state on the local node use `GetIndexStatisticsOperation`.
+
+* In this page:
+ * [Get Index example](../../../../client-api/operations/maintenance/indexes/get-index.mdx#get-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index.mdx#syntax)
+
+
+## Get Index example
+
+
+
+{`// Define the get index operation, pass the index name
+const getIndexOp = new GetIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance.send
+const indexDefinition = await store.maintenance.send(getIndexOp);
+
+// Access the index definition
+const state = indexDefinition.state;
+const lockMode = indexDefinition.lockMode;
+const deploymentMode = indexDefinition.deploymentMode;
+// etc.
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const getIndexOp = new GetIndexOperation(indexName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Name of index to get |
+
+| Return value of `store.maintenance.send(getIndexOp)` | Description |
+|- | - |
+| `IndexDefinition` | An instance of class [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-php.mdx
new file mode 100644
index 0000000000..ed48747e22
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-php.mdx
@@ -0,0 +1,57 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexOperation` to retrieve an index definition from the database.
+
+* The operation will execute on the node defined by the [client configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, the index definition returned is taken from the database record,
+ which is common to all the database-group nodes.
+ i.e., an index state change done only on a local node is not reflected.
+
+* To get the index state on the local node use `GetIndexStatisticsOperation`.
+
+* In this page:
+ * [Get Index example](../../../../client-api/operations/maintenance/indexes/get-index.mdx#get-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index.mdx#syntax)
+
+
+## Get Index example
+
+
+
+{`// Define the get index operation, pass the index name
+$getIndexOp = new GetIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+/** @var IndexDefinition $index */
+$index = $store->maintenance()->send($getIndexOp);
+
+// Access the index definition
+$state = $index->getState();
+$lockMode = $index->getLockMode();
+$deploymentMode = $index->getDeploymentMode();
+// etc.
+`}
+
+
+
+## Syntax
+
+
+
+{`GetIndexOperation(?string $indexName)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$indexName** | `?string` | Name of index to get |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-python.mdx
new file mode 100644
index 0000000000..ca8d6b442f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-index-python.mdx
@@ -0,0 +1,63 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexOperation` to retrieve an index definition from the database.
+
+* The operation will execute on the node defined by the [client configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, the index definition returned is taken from the database record,
+ which is common to all the database-group nodes.
+ i.e., an index state change done only on a local node is not reflected.
+
+* To get the index state on the local node use `GetIndexStatisticsOperation`.
+
+* In this page:
+ * [Get Index example](../../../../client-api/operations/maintenance/indexes/get-index.mdx#get-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-index.mdx#syntax)
+
+
+## Get Index example
+
+
+
+{`# Define the get index operation, pass the index name
+get_index_op = GetIndexOperation("Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+index = store.maintenance.send(get_index_op)
+
+# Access the index definition
+state = index.state
+lock_mode = index.lock_mode
+deployment_mode = index.deployment_mode
+# etc.
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class GetIndexOperation(MaintenanceOperation[IndexDefinition]):
+ def __init__(self, index_name: str): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **index_name** | `str` | Name of index to get |
+
+| Return value of `store.maintenance.send(GetIndexOperation)` | Description |
+|- | - |
+| `IndexDefinition` | An instance of class [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-csharp.mdx
new file mode 100644
index 0000000000..447218fbf8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-csharp.mdx
@@ -0,0 +1,87 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexesOperation` to retrieve multiple **index definitions** from the database.
+
+* The operation will execute on the node defined by the [client configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, the index definitions returned are taken from the database record,
+ which is common to all the database-group nodes.
+ i.e., an index state change done only on a local node is not reflected.
+
+* To get a specific index state on a local node use `GetIndexStatisticsOperation`.
+
+* In this page:
+ * [Get Indexes example](../../../../client-api/operations/maintenance/indexes/get-indexes.mdx#get-indexes-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-indexes.mdx#syntax)
+
+
+## Get Indexes example
+
+
+
+
+{`// Define the get indexes operation
+// Pass number of indexes to skip & number of indexes to retrieve
+var getIndexesOp = new GetIndexesOperation(0, 10);
+
+// Execute the operation by passing it to Maintenance.Send
+IndexDefinition[] indexes = store.Maintenance.Send(getIndexesOp);
+
+// indexes will contain the first 10 indexes, alphabetically ordered by index name
+// Access an index definition from the resulting list:
+var name = indexes[0].Name;
+var state = indexes[0].State;
+var lockMode = indexes[0].LockMode;
+var deploymentMode = indexes[0].DeploymentMode;
+// etc.
+`}
+
+
+
+
+{`// Define the get indexes operation
+// Pass number of indexes to skip & number of indexes to retrieve
+var getIndexesOp = new GetIndexesOperation(0, 10);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+IndexDefinition[] indexes = await store.Maintenance.SendAsync(getIndexesOp);
+
+// indexes will contain the first 10 indexes, alphabetically ordered by index name
+// Access an index definition from the resulting list:
+var name = indexes[0].Name;
+var state = indexes[0].State;
+var lockMode = indexes[0].LockMode;
+var deploymentMode = indexes[0].DeploymentMode;
+// etc.
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetIndexesOperation(int start, int pageSize)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **start** | `int` | Number of indexes to skip |
+| **pageSize** | `int` | Number of indexes to retrieve |
+
+| Return value of `store.Maintenance.Send(getIndexesOp)` | Description |
+| - | - |
+| `IndexDefinition[]` | A list of [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) classes, ordered alphabetically by index name. |
+
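+Since full index definitions are returned, the results can be filtered client-side.
+For example, a minimal sketch that lists the names of all map-reduce indexes
+(the upper bound of 1024 indexes is an arbitrary assumption made here for brevity):
+
+
+
+{`// Retrieve the index definitions
+// (1024 is an arbitrary upper bound assumed here for brevity)
+IndexDefinition[] indexes = store.Maintenance.Send(new GetIndexesOperation(0, 1024));
+
+foreach (var index in indexes)
+{
+    // A map-reduce index is identified by having a Reduce function defined
+    if (string.IsNullOrEmpty(index.Reduce) == false)
+        Console.WriteLine(index.Name);
+}
+`}
+
+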
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-java.mdx
new file mode 100644
index 0000000000..976106d376
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-java.mdx
@@ -0,0 +1,37 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**GetIndexesOperation** is used to retrieve multiple index definitions from a database.
+
+### Syntax
+
+
+
+{`public GetIndexesOperation(int start, int pageSize)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **start** | int | Number of indexes that should be skipped |
+| **pageSize** | int | Maximum number of indexes that will be retrieved |
+
+| Return Value | |
+| ------------- | ----- |
+| `IndexDefinition[]` | Array of IndexDefinition instances representing the retrieved indexes. |
+
+### Example
+
+
+
+{`IndexDefinition[] indexes
+ = store.maintenance()
+ .send(new GetIndexesOperation(0, 10));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-nodejs.mdx
new file mode 100644
index 0000000000..11d6b925d1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-nodejs.mdx
@@ -0,0 +1,66 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexesOperation` to retrieve multiple **index definitions** from the database.
+
+* The operation will execute on the node defined by the [client configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, the index definitions returned are taken from the database record,
+ which is common to all the database-group nodes.
+ i.e., an index state change done only on a local node is not reflected.
+
+* To get a specific index state on a local node use `GetIndexStatisticsOperation`.
+
+* In this page:
+ * [Get Indexes example](../../../../client-api/operations/maintenance/indexes/get-indexes.mdx#get-indexes-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-indexes.mdx#syntax)
+
+
+## Get Indexes example
+
+
+
+{`// Define the get indexes operation
+// Pass number of indexes to skip & number of indexes to retrieve
+const getIndexesOp = new GetIndexesOperation(0, 10);
+
+// Execute the operation by passing it to maintenance.send
+const indexes = await store.maintenance.send(getIndexesOp);
+
+// indexes will contain the first 10 indexes, alphabetically ordered by index name
+// Access an index definition from the resulting list:
+const name = indexes[0].name;
+const state = indexes[0].state;
+const lockMode = indexes[0].lockMode;
+const deploymentMode = indexes[0].deploymentMode;
+// etc.
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const getIndexesOp = new GetIndexesOperation(start, pageSize);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **start** | `number` | Number of indexes to skip |
+| **pageSize** | `number` | Number of indexes to retrieve |
+
+| Return value of `store.maintenance.send(getIndexesOp)` | Description |
+| - | - |
+| `IndexDefinition[]` | A list of [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition), ordered alphabetically by index name. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-php.mdx
new file mode 100644
index 0000000000..f7704d71a3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-php.mdx
@@ -0,0 +1,61 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexesOperation` to retrieve multiple **index definitions** from the database.
+
+* The operation will execute on the node defined by the [client configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, the index definitions returned are taken from the database record,
+ which is common to all the database-group nodes.
+ i.e., an index state change done only on a local node is not reflected.
+
+* To get a specific index state on a local node use `GetIndexStatisticsOperation`.
+
+* In this page:
+ * [Get Indexes example](../../../../client-api/operations/maintenance/indexes/get-indexes.mdx#get-indexes-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-indexes.mdx#syntax)
+
+
+## Get Indexes example
+
+
+
+{`// Define the get indexes operation
+// Pass number of indexes to skip & number of indexes to retrieve
+$getIndexesOp = new GetIndexesOperation(0, 10);
+
+// Execute the operation by passing it to Maintenance.Send
+/** @var IndexDefinitionArray $indexes */
+$indexes = $store->maintenance()->send($getIndexesOp);
+
+// indexes will contain the first 10 indexes, alphabetically ordered by index name
+// Access an index definition from the resulting list:
+$name = $indexes[0]->getName();
+$state = $indexes[0]->getState();
+$lockMode = $indexes[0]->getLockMode();
+$deploymentMode = $indexes[0]->getDeploymentMode();
+// etc.
+`}
+
+
+
+## Syntax
+
+
+
+{`GetIndexesOperation(int $start, int $pageSize)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$start** | `int` | Number of indexes to skip |
+| **$pageSize** | `int` | Number of indexes to retrieve |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-python.mdx
new file mode 100644
index 0000000000..3b0748f054
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-indexes-python.mdx
@@ -0,0 +1,67 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetIndexesOperation` to retrieve multiple **index definitions** from the database.
+
+* The operation will execute on the node defined by the [client configuration](../../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ However, the index definitions returned are taken from the database record,
+ which is common to all the database-group nodes.
+ i.e., an index state change done only on a local node is not reflected.
+
+* To get a specific index state on a local node use `GetIndexStatisticsOperation`.
+
+* In this page:
+ * [Get Indexes example](../../../../client-api/operations/maintenance/indexes/get-indexes.mdx#get-indexes-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-indexes.mdx#syntax)
+
+
+## Get Indexes example
+
+
+
+{`# Define the get indexes operation
+# Pass number of indexes to skip & number of indexes to retrieve
+get_index_op = GetIndexesOperation(0, 10)
+
+# Execute the operation by passing it to maintenance.send
+indexes = store.maintenance.send(get_index_op)
+
+# indexes will contain the first 10 indexes, alphabetically ordered by index name
+# Access an index definition from the resulting list:
+name = indexes[0].name
+state = indexes[0].state
+lock_mode = indexes[0].lock_mode
+deployment_mode = indexes[0].deployment_mode
+# etc.
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class GetIndexesOperation(MaintenanceOperation[List[IndexDefinition]]):
+ def __init__(self, start: int, page_size: int): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **start** | `int` | Number of indexes to skip |
+| **page_size** | `int` | Number of indexes to retrieve |
+
+| Return value of `store.maintenance.send(GetIndexesOperation)` | Description |
+| - | - |
+| `List[IndexDefinition]` | A list of [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) classes, ordered alphabetically by index name. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-csharp.mdx
new file mode 100644
index 0000000000..63080e64a2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-csharp.mdx
@@ -0,0 +1,70 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetTermsOperation` to retrieve the **terms of an index-field**.
+
+* In this page:
+ * [Get Terms example](../../../../client-api/operations/maintenance/indexes/get-terms.mdx#get-terms-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-terms.mdx#syntax)
+
+
+## Get Terms example
+
+
+
+
+{`// Define the get terms operation
+// Pass the requested index-name, index-field, start value & page size
+var getTermsOp = new GetTermsOperation("Orders/Totals", "Employee", "employees/5-a", 10);
+
+// Execute the operation by passing it to Maintenance.Send
+string[] fieldTerms = store.Maintenance.Send(getTermsOp);
+
+// fieldTerms will contain all the terms that come after the term 'employees/5-a' for index-field 'Employee'
+`}
+
+
+
+
+{`// Define the get terms operation
+// Pass the requested index-name, index-field, start value & page size
+var getTermsOp = new GetTermsOperation("Orders/Totals", "Employee", "employees/5-a", 10);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+string[] fieldTerms = await store.Maintenance.SendAsync(getTermsOp);
+
+// fieldTerms will contain all the terms that come after the term 'employees/5-a' for index-field 'Employee'
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetTermsOperation(string indexName, string field, string fromValue, int? pageSize = null)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Name of an index to get terms for |
+| **field** | `string` | Name of index-field to get terms for |
+| **fromValue** | `string` | The starting term from which to return results. This term is not included in the results. `null` - start from first term. |
+| **pageSize** | `int?` | Number of terms to get. `null` - return all terms. |
+
+| Return value of `store.Maintenance.Send(getTermsOp)` | Description |
+| - |- |
+| string[] | List of terms for the requested index-field. Alphabetically ordered. |
+
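+As noted in the table above, passing `null` for `fromValue` starts from the first term,
+and omitting `pageSize` returns all terms. A minimal sketch combining both:
+
+
+
+{`// Pass 'null' as fromValue and omit pageSize
+// to retrieve ALL terms of index-field 'Employee', starting from the first term
+var getAllTermsOp = new GetTermsOperation("Orders/Totals", "Employee", fromValue: null);
+
+string[] allTerms = store.Maintenance.Send(getAllTermsOp);
+`}
+
+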
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-java.mdx
new file mode 100644
index 0000000000..67c7e22419
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-java.mdx
@@ -0,0 +1,43 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The **GetTermsOperation** will retrieve stored terms for a field of an index.
+
+## Syntax
+
+
+
+{`public GetTermsOperation(String indexName, String field, String fromValue)
+
+public GetTermsOperation(String indexName, String field, String fromValue, Integer pageSize)
+`}
+
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | String | Name of an index to get terms for |
+| **field** | String | Name of field to get terms for |
+| **fromValue** | String | The starting term from which to return results |
+| **pageSize** | Integer | Number of terms to get |
+
+| Return Value | |
+| ------------- | ----- |
+| String[] | List of terms for the requested index-field. Alphabetically ordered. |
+
+## Example
+
+
+
+{`String[] terms = store
+ .maintenance()
+ .send(
+ new GetTermsOperation("Orders/Totals", "Employee", null));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-nodejs.mdx
new file mode 100644
index 0000000000..6861ee636e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-nodejs.mdx
@@ -0,0 +1,57 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetTermsOperation` to retrieve the **terms of an index-field**.
+
+* In this page:
+ * [Get Terms example](../../../../client-api/operations/maintenance/indexes/get-terms.mdx#get-terms-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-terms.mdx#syntax)
+
+
+## Get Terms example
+
+
+
+{`// Define the get terms operation
+// Pass the requested index-name, index-field, start value & page size
+const getTermsOp = new GetTermsOperation("Orders/Totals", "Employee", "employees/5-a", 10);
+
+// Execute the operation by passing it to maintenance.send
+const fieldTerms = await store.maintenance.send(getTermsOp);
+
+// fieldTerms will contain all the terms that come after the term 'employees/5-a' for index-field 'Employee'
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+const getTermsOp = new GetTermsOperation(indexName, field, fromValue);
+const getTermsOp = new GetTermsOperation(indexName, field, fromValue, pageSize);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Name of an index to get terms for |
+| **field** | `string` | Name of index-field to get terms for |
+| **fromValue** | `string` | The starting term from which to return results. This term is not included in the results. `null` - start from first term. |
+| **pageSize** | `number` | Number of terms to get. `undefined/null` - return all terms. |
+
+| Return value of `store.maintenance.send(getTermsOp)` | Description |
+| - |- |
+| `string[]` | List of terms for the requested index-field. Alphabetically ordered. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-php.mdx
new file mode 100644
index 0000000000..347151f4d6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-php.mdx
@@ -0,0 +1,52 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetTermsOperation` to retrieve the **terms of an index-field**.
+
+* In this page:
+ * [Get Terms example](../../../../client-api/operations/maintenance/indexes/get-terms.mdx#get-terms-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-terms.mdx#syntax)
+
+
+## Get Terms example
+
+
+
+{`// Define the get terms operation
+// Pass the requested index-name, index-field, start value & page size
+$getTermsOp = new GetTermsOperation("Orders/Totals", "Employee", "employees/5-a", 10);
+
+// Execute the operation by passing it to Maintenance.Send
+/** @var StringArrayResult $fieldTerms */
+$fieldTerms = $store->maintenance()->send($getTermsOp);
+
+// fieldTerms will contain all the terms that come after the term 'employees/5-a' for index-field 'Employee'
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`GetTermsOperation(?string $indexName, ?string $field, ?string $fromValue, ?int $pageSize = null)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$indexName** | `?string` | Name of an index to get terms for |
+| **$field** | `?string` | Name of index-field to get terms for |
+| **$fromValue** | `?string` | The starting term to return results from. This term is not included in the results. `null` - start from the first term. |
+| **$pageSize** | `?int` | Number of terms to get. `null` - return all terms. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-python.mdx
new file mode 100644
index 0000000000..4d70ed6705
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_get-terms-python.mdx
@@ -0,0 +1,56 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `GetTermsOperation` to retrieve the **terms of an index-field**.
+
+* In this page:
+ * [Get Terms example](../../../../client-api/operations/maintenance/indexes/get-terms.mdx#get-terms-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/get-terms.mdx#syntax)
+
+
+## Get Terms example
+
+
+
+{`# Define the get terms operation
+# Pass the requested index-name, index-field, start value & page size
+get_terms_op = GetTermsOperation("Orders/Totals", "Employee", "employees/5-A", 10)
+
+# Execute the operation by passing it to maintenance.send
+field_terms = store.maintenance.send(get_terms_op)
+
+# field_terms will contain all the terms that come after the term 'employees/5-A' for index-field 'Employee'
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class GetTermsOperation(MaintenanceOperation[List[str]]):
+ def __init__(self, index_name: str, field: str, from_value: Optional[str], page_size: int = None): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **index_name** | `str` | Name of an index to get terms for |
+| **field** | `str` | Name of index-field to get terms for |
+| **from_value** | `str` (optional) | The starting term from which to return results. This term is not included in the results. `None` - start from first term. |
+| **page_size** | `int` | Number of terms to get. `None` - return all terms. |
+
+| Return value of `store.maintenance.send(GetTermsOperation)` | Description |
+| - |- |
+| `List[str]` | List of terms for the requested index-field. Alphabetically ordered. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-csharp.mdx
new file mode 100644
index 0000000000..bd1c75649c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-csharp.mdx
@@ -0,0 +1,95 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* **When deploying an index**:
+ * If the new index definition is **different** from the current index definition on the server,
+ the current index will be overwritten and data will be re-indexed according to the new index definition.
+ * If the new index definition is the **same** as the one currently deployed on the server,
+ it will not be overwritten and re-indexing will not occur upon deploying the index.
+
+* **Prior to deploying an index**:
+ * Use `IndexHasChangedOperation` to check if the new index definition differs from the one
+ on the server to avoid any unwanted changes to the existing indexed data.
+
+* In this page:
+ * [Check if index has changed](../../../../client-api/operations/maintenance/indexes/index-has-changed.mdx#check-if-index-has-changed)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/index-has-changed.mdx#syntax)
+
+
+## Check if index has changed
+
+
+
+
+
+{`// Some index definition
+var indexDefinition = new IndexDefinition
+{
+ Name = "UsersByName",
+ Maps = { "from user in docs.Users select new { user.Name }"}
+};
+
+// Define the has-changed operation, pass the index definition
+var indexHasChangedOp = new IndexHasChangedOperation(indexDefinition);
+
+// Execute the operation by passing it to Maintenance.Send
+bool indexHasChanged = store.Maintenance.Send(indexHasChangedOp);
+
+// Return values:
+// false: The definition of the index passed is the SAME as the one deployed on the server
+// true: The definition of the index passed is DIFFERENT than the one deployed on the server
+// Or - index does not exist
+`}
+
+
+
+
+{`// Some index definition
+var indexDefinition = new IndexDefinition
+{
+ Name = "UsersByName",
+ Maps = { "from user in docs.Users select new { user.Name }"}
+};
+
+// Define the has-changed operation, pass the index definition
+var indexHasChangedOp = new IndexHasChangedOperation(indexDefinition);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+bool indexHasChanged = await store.Maintenance.SendAsync(indexHasChangedOp);
+
+// Return values:
+// false: The definition of the index passed is the SAME as the one deployed on the server
+// true: The definition of the index passed is DIFFERENT than the one deployed on the server
+// Or - index does not exist
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public IndexHasChangedOperation(IndexDefinition definition)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **definition** | [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) | The index definition to check |
+
+| Return Value | Description |
+| - | - |
+| `true` | When the index **does not exist** on the server or - When the index definition **is different** from the one deployed on the server |
+| `false` | When the index definition is **the same** as the one deployed on the server |
+
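+A common pattern is to deploy the index only when its definition has actually changed,
+so that an unnecessary overwrite and re-indexing are avoided.
+A minimal sketch, reusing the `indexDefinition` from the example above:
+
+
+
+{`// Deploy the index only if its definition differs from the one on the server
+bool hasChanged = store.Maintenance.Send(new IndexHasChangedOperation(indexDefinition));
+
+if (hasChanged)
+{
+    // The definition is new or different - deploying it will overwrite
+    // the existing index (if any) and trigger re-indexing
+    store.Maintenance.Send(new PutIndexesOperation(indexDefinition));
+}
+`}
+
+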
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-java.mdx
new file mode 100644
index 0000000000..3419bf22e8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-java.mdx
@@ -0,0 +1,37 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**IndexHasChangedOperation** lets you check whether a given index definition differs from the one on the server. This is useful prior to index deployment, to determine whether the existing index would be overwritten and its indexed data lost.
+
+## Syntax
+
+
+
+{`public IndexHasChangedOperation(IndexDefinition definition)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **definition** | `IndexDefinition` | The index definition to check |
+
+| Return Value | |
+| ------------- | ----- |
+| true | if the index **does not exist** on the server |
+| true | if the index definition **does not match** the one passed in the **definition** parameter |
+| false | if there are no differences between the index definition on the server and the one passed in the **definition** parameter |
+
+## Example
+
+
+
+{`Boolean ordersIndexHasChanged =
+ store.maintenance().send(new IndexHasChangedOperation(ordersIndexDefinition));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-nodejs.mdx
new file mode 100644
index 0000000000..1163f91954
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-nodejs.mdx
@@ -0,0 +1,68 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* **When deploying an index**:
+ * If the new index definition is **different** from the current index definition on the server,
+ the current index will be overwritten and data will be re-indexed according to the new index definition.
+ * If the new index definition is the **same** as the one currently deployed on the server,
+ it will not be overwritten and re-indexing will not occur upon deploying the index.
+
+* **Prior to deploying an index**:
+ * Use `IndexHasChangedOperation` to check if the new index definition differs from the one
+ on the server to avoid any unwanted changes to the existing indexed data.
+
+* In this page:
+ * [Check if index has changed](../../../../client-api/operations/maintenance/indexes/index-has-changed.mdx#check-if-index-has-changed)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/index-has-changed.mdx#syntax)
+
+
+## Check if index has changed
+
+
+
+{`// Some index definition
+const indexDefinition = new IndexDefinition();
+indexDefinition.name = "UsersByName";
+indexDefinition.maps = new Set([ \`from user in docs.Users select new \{ user.Name \}\` ]);
+
+// Define the has-changed operation, pass the index definition
+const indexHasChangedOp = new IndexHasChangedOperation(indexDefinition);
+
+// Execute the operation by passing it to maintenance.send
+const indexHasChanged = await documentStore.maintenance.send(indexHasChangedOp);
+
+// Return values:
+// false: The definition of the index passed is the SAME as the one deployed on the server
+// true: The definition of the index passed is DIFFERENT than the one deployed on the server
+// Or - index does not exist
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const indexHasChangedOp = new IndexHasChangedOperation(definition);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **definition** | [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) | The index definition to check |
+
+| Return Value | Description |
+| - | - |
+| `true` | When the index **does not exist** on the server or - When the index definition **is different** from the one deployed on the server |
+| `false` | When the index definition is **the same** as the one deployed on the server |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-php.mdx
new file mode 100644
index 0000000000..414881a6e5
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-php.mdx
@@ -0,0 +1,68 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* **When deploying an index**:
+ * If the new index definition is **different** from the current index definition on the server,
+ the current index will be overwritten and data will be re-indexed according to the new index definition.
+ * If the new index definition is the **same** as the one currently deployed on the server,
+ it will not be overwritten and re-indexing will not occur upon deploying the index.
+
+* **Prior to deploying an index**:
+ * Use `IndexHasChangedOperation` to check if the new index definition differs from the one
+ on the server to avoid any unwanted changes to the existing indexed data.
+
+* In this page:
+ * [Check if index has changed](../../../../client-api/operations/maintenance/indexes/index-has-changed.mdx#check-if-index-has-changed)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/index-has-changed.mdx#syntax)
+
+
+## Check if index has changed
+
+
+
+{`// Some index definition
+$indexDefinition = new IndexDefinition();
+$indexDefinition->setName("UsersByName");
+$indexDefinition->setMaps(["from user in docs.Users select new \{ user.Name \}"]);
+
+// Define the has-changed operation, pass the index definition
+$indexHasChangedOp = new IndexHasChangedOperation($indexDefinition);
+
+// Execute the operation by passing it to Maintenance.Send
+$indexHasChanged = $store->maintenance()->send($indexHasChangedOp);
+
+// Return values:
+// false: The definition of the index passed is the SAME as the one deployed on the server
+// true: The definition of the index passed is DIFFERENT than the one deployed on the server
+// Or - index does not exist
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`IndexHasChangedOperation(?IndexDefinition $definition)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$definition** | [?IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) | The index definition to check |
+
+| Return Value | Description |
+| - | - |
+| `true` | When the index **does not exist** on the server or - When the index definition **is different** from the one deployed on the server |
+| `false` | When the index definition is **the same** as the one deployed on the server |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-python.mdx
new file mode 100644
index 0000000000..8993660adf
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_index-has-changed-python.mdx
@@ -0,0 +1,69 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* **When deploying an index**:
+ * If the new index definition is **different** from the current index definition on the server,
+ the current index will be overwritten and data will be re-indexed according to the new index definition.
+ * If the new index definition is the **same** as the one currently deployed on the server,
+ it will not be overwritten and re-indexing will not occur upon deploying the index.
+
+* **Prior to deploying an index**:
+ * Use `IndexHasChangedOperation` to check if the new index definition differs from the one
+ on the server to avoid any unwanted changes to the existing indexed data.
+
+* In this page:
+ * [Check if index has changed](../../../../client-api/operations/maintenance/indexes/index-has-changed.mdx#check-if-index-has-changed)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/index-has-changed.mdx#syntax)
+
+
+## Check if index has changed
+
+
+
+{`# Some index definition
+index_definition = IndexDefinition(
+ name="UsersByName", maps=\{"from user in docs.Users select new \{ user.Name \}"\}
+)
+
+# Define the has-changed operation, pass the index definition
+index_has_changed_op = IndexHasChangedOperation(index_definition)
+
+# Execute the operation by passing it to maintenance.send
+index_has_changed = store.maintenance.send(index_has_changed_op)
+
+# Return values:
+# False: The definition of the index passed is the SAME as the one deployed on the server
+# True: The definition of the index passed is DIFFERENT from the one deployed on the server
+# Or - index does not exist
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class IndexHasChangedOperation(MaintenanceOperation[bool]):
+ def __init__(self, index: IndexDefinition): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **index** | [IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#indexdefinition) | The index definition to check |
+
+| Return Value | Description |
+| - | - |
+| `True` | When the index **does not exist** on the server or - When the index definition **is different** from the one deployed on the server |
+| `False` | When the index definition is **the same** as the one deployed on the server |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-csharp.mdx
new file mode 100644
index 0000000000..b3e3f2ea25
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-csharp.mdx
@@ -0,0 +1,341 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* There are a few ways to create and deploy indexes in a database.
+
+* This page describes deploying a **static-index** using the `PutIndexesOperation` operation.
+ For a general description of Operations see [what are operations](../../../../client-api/operations/what-are-operations.mdx).
+
+* In this page:
+ * [Ways to deploy indexes - short summary](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#ways-to-deploy-indexes---short-summary)
+ * [Put indexes operation with IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#put-indexes-operation-with-indexdefinition)
+ * [Put indexes operation with IndexDefinitionBuilder](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#put-indexes-operation-with-indexdefinitionbuilder)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#syntax)
+
+
+## Ways to deploy indexes - short summary
+
+
+
+##### Static-indexes:
+
+There are a few ways to deploy a static-index from the Client API:
+
+* The following methods are explained in section [Deploy a static-index](../../../../indexes/creating-and-deploying.mdx#deploy-a-static-index):
+ * Call `Execute()` on a specific index instance.
+ * Call `ExecuteIndex()` or `ExecuteIndexes()` on your _DocumentStore_ object.
+ * Call `IndexCreation.CreateIndexes()`.
+
+* Alternatively, you can execute the `PutIndexesOperation` maintenance operation on the _DocumentStore_, **as explained below**; a short sketch of all these alternatives follows.
+
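+For reference, a minimal sketch of these alternatives, assuming a hypothetical
+`Orders_ByTotal` class that derives from `AbstractIndexCreationTask`:
+
+
+
+{`// A sketch of the deployment alternatives listed above,
+// assuming a hypothetical 'Orders_ByTotal' index class
+// deriving from AbstractIndexCreationTask:
+
+// Call Execute() on a specific index instance
+new Orders_ByTotal().Execute(store);
+
+// Call ExecuteIndex() on the DocumentStore object
+store.ExecuteIndex(new Orders_ByTotal());
+
+// Deploy all indexes found in the assembly containing the index class
+IndexCreation.CreateIndexes(typeof(Orders_ByTotal).Assembly, store);
+`}
+
+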
+
+
+
+##### Auto-indexes:
+
+ * An auto-index is created by the server when making a filtering query that doesn't specify which index to use.
+ Learn more in [Creating auto indexes](../../../../indexes/creating-and-deploying.mdx#auto-indexes).
+
+
+
+
+## Put indexes operation with IndexDefinition
+
+Using `PutIndexesOperation` with **IndexDefinition** allows you to:
+
+ * Choose any name for the index.
+ This string-based name is specified when querying the index.
+ * Set low-level properties available in _IndexDefinition_.
+
+
+
+
+{`// Create an index definition
+var indexDefinition = new IndexDefinition
+{
+ // Name is mandatory, can use any string
+ Name = "OrdersByTotal",
+
+ // Define the index Map functions, string format
+ // A single string for a map-index, multiple strings for a multi-map-index
+    Maps = new HashSet<string>
+ {
+ @"
+ // Define the collection that will be indexed:
+ from order in docs.Orders
+
+ // Define the index-entry:
+ select new
+ {
+ // Define the index-fields within each index-entry:
+ Employee = order.Employee,
+ Company = order.Company,
+ Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))
+ }"
+ },
+
+ // Reduce = ...,
+
+ // Can provide other index definitions available on the IndexDefinition class
+ // Override the default values, e.g.:
+ DeploymentMode = IndexDeploymentMode.Rolling,
+ Priority = IndexPriority.High,
+ Configuration = new IndexConfiguration
+ {
+ { "Indexing.IndexMissingFieldsAsNull", "true" }
+ }
+ // See all available properties in syntax below
+};
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+IMaintenanceOperation putIndexesOp = new PutIndexesOperation(indexDefinition);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(putIndexesOp);
+`}
+
+
+
+
+{`// Create an index definition
+var indexDefinition = new IndexDefinition
+{
+ // Name is mandatory, can use any string
+ Name = "OrdersByTotal",
+
+ // Define the index Map functions, string format
+ // A single string for a map-index, multiple strings for a multi-map-index
+    Maps = new HashSet<string>
+ {
+ @"
+ // Define the collection that will be indexed:
+ from order in docs.Orders
+
+ // Define the index-entry:
+ select new
+ {
+ // Define the index-fields within each index-entry:
+ Employee = order.Employee,
+ Company = order.Company,
+ Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))
+ }"
+ },
+
+ // Reduce = ...,
+
+ // Can provide other index definitions available on the IndexDefinition class
+ // Override the default values, e.g.:
+ DeploymentMode = IndexDeploymentMode.Rolling,
+ Priority = IndexPriority.High,
+ Configuration = new IndexConfiguration
+ {
+ { "Indexing.IndexMissingFieldsAsNull", "true" }
+ }
+ // See all available properties in syntax below
+};
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+IMaintenanceOperation putIndexesOp = new PutIndexesOperation(indexDefinition);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(putIndexesOp);
+`}
+
+
+
+
+{`// Create an index definition
+var indexDefinition = new IndexDefinition
+{
+ // Name is mandatory, can use any string
+ Name = "OrdersByTotal",
+
+ // Define the index Map functions, string format
+ // A single string for a map-index, multiple strings for a multi-map-index
+    Maps = new HashSet<string>
+ {
+ @"map('Orders', function(order) {
+ return {
+ Employee: order.Employee,
+ Company: order.Company,
+ Total: order.Lines.reduce(function(sum, l) {
+ return sum + (l.Quantity * l.PricePerUnit) * (1 - l.Discount);
+ }, 0)
+ };
+ });"
+ },
+
+ // Reduce = ...,
+
+ // Can provide other index definitions available on the IndexDefinition class
+ // Override the default values, e.g.:
+ DeploymentMode = IndexDeploymentMode.Rolling,
+ Priority = IndexPriority.High,
+ Configuration = new IndexConfiguration
+ {
+ { "Indexing.IndexMissingFieldsAsNull", "true" }
+ }
+ // See all available properties in syntax below
+};
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+IMaintenanceOperation putIndexesOp = new PutIndexesOperation(indexDefinition);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(putIndexesOp);
+`}
+
+
+
+
+
+
+## Put indexes operation with IndexDefinitionBuilder
+
+Using `PutIndexesOperation` with an IndexDefinition created from an **IndexDefinitionBuilder** allows:
+
+ * Creating an index definition using a strongly typed LINQ syntax.
+ * Setting low-level properties available in _IndexDefinitionBuilder_.
+ * Note:
+ Only map or map-reduce indexes can be generated by the _IndexDefinitionBuilder_.
+ To generate multi-map indexes use the above _IndexDefinition_ option.
+
+
+
+
+{`// Create an index definition builder
+var builder = new IndexDefinitionBuilder<Order>
+{
+ // Define the map function, strongly typed LINQ format
+ Map =
+ // Define the collection that will be indexed:
+ orders => from order in orders
+ // Define the index-entry:
+ select new
+ {
+ // Define the index-fields within each index-entry:
+ Employee = order.Employee,
+ Company = order.Company,
+ Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))
+ },
+
+ // Can provide other properties available on the IndexDefinitionBuilder class, e.g.:
+ DeploymentMode = IndexDeploymentMode.Rolling,
+ Priority = IndexPriority.High,
+ // Reduce = ..., etc.
+};
+
+// Generate index definition from builder
+// Pass the conventions, needed for building the Maps property
+var indexDefinition = builder.ToIndexDefinition(store.Conventions);
+
+// Optionally, set the index name, can use any string
+// If not provided then default name from builder is used, e.g.: "IndexDefinitionBuildersOfOrders"
+indexDefinition.Name = "OrdersByTotal";
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+IMaintenanceOperation putIndexesOp = new PutIndexesOperation(indexDefinition);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(putIndexesOp);
+`}
+
+
+
+
+{`// Create an index definition builder
+var builder = new IndexDefinitionBuilder<Order>
+{
+ // Define the map function, strongly typed LINQ format
+ Map =
+ // Define the collection that will be indexed:
+ orders => from order in orders
+ // Define the index-entry:
+ select new
+ {
+ // Define the index-fields within each index-entry:
+ Employee = order.Employee,
+ Company = order.Company,
+ Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))
+ },
+
+ // Can provide other properties available on the IndexDefinitionBuilder class, e.g.:
+ DeploymentMode = IndexDeploymentMode.Rolling,
+ Priority = IndexPriority.High,
+ // Reduce = ..., etc.
+};
+
+// Generate index definition from builder
+// Pass the conventions, needed for building the Maps property
+var indexDefinition = builder.ToIndexDefinition(store.Conventions);
+
+// Optionally, set the index name, can use any string
+// If not provided then default name from builder is used, e.g.: "IndexDefinitionBuildersOfOrders"
+indexDefinition.Name = "OrdersByTotal";
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+IMaintenanceOperation putIndexesOp = new PutIndexesOperation(indexDefinition);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(putIndexesOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public PutIndexesOperation(params IndexDefinition[] indexesToAdd)
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|----------------------------|----------------------------------|
+| **indexesToAdd** | `params IndexDefinition[]` | Definitions of indexes to deploy |
+
+
+
+| `IndexDefinition` parameter | Type | Description |
+|----------------------------------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------|
+| Name | `string` | Name of the index, a unique identifier |
+| Maps                                         | `HashSet<string>`                        | All the map functions for the index |
+| Reduce | `string` | The index reduce function |
+| DeploymentMode | `IndexDeploymentMode?` | Deployment mode (Parallel, Rolling) |
+| State | `IndexState?` | State of index (Normal, Disabled, Idle, Error) |
+| Priority | `IndexPriority?` | Priority of index (Low, Normal, High) |
+| LockMode | `IndexLockMode?` | Lock mode of index (Unlock, LockedIgnore, LockedError) |
+| Fields                                       | `Dictionary<string, IndexFieldOptions>` | _IndexFieldOptions_ per index field |
+| AdditionalSources                            | `Dictionary<string, string>`            | Additional code files to be compiled with this index |
+| AdditionalAssemblies                         | `HashSet<AdditionalAssembly>`           | Additional assemblies that are referenced |
+| Configuration | `IndexConfiguration` | Can override [indexing configuration](../../../../server/configuration/indexing-configuration.mdx) by setting this dictionary |
+| OutputReduceToCollection | `string` | A collection name for saving the reduce results as documents |
+| ReduceOutputIndex | `long?` | This number will be part of the reduce results documents IDs |
+| PatternForOutputReduceToCollectionReferences | `string` | Pattern for documents IDs which reference IDs of reduce results documents |
+| PatternReferencesCollectionName | `string` | A collection name for the reference documents created based on provided pattern |
+
+| `store.Maintenance.Send(putIndexesOp)` return value | Description |
+|-------------------------------------------------------|------------------------------------|
+| `PutIndexResult[]` | List of _PutIndexResult_ per index |
+
+| `PutIndexResult` parameter | Type | Description |
+|-----------------------------|----------|-----------------------------------------|
+| Index | `string` | Name of the index that was added |
+| RaftCommandIndex | `long` | Index of raft command that was executed |
+
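+Since the operation accepts `params IndexDefinition[]`, several indexes can be deployed in a single call,
+and the returned `PutIndexResult[]` contains one entry per deployed index.
+A minimal sketch (the index names and map strings below are illustrative):
+
+{`// Deploy two index definitions in a single call
+var byEmployee = new IndexDefinition
+{
+    Name = "Orders/ByEmployee",
+    Maps = new HashSet<string>
+    {
+        "from order in docs.Orders select new { order.Employee }"
+    }
+};
+
+var byCompany = new IndexDefinition
+{
+    Name = "Orders/ByCompany",
+    Maps = new HashSet<string>
+    {
+        "from order in docs.Orders select new { order.Company }"
+    }
+};
+
+// One PutIndexResult is returned per deployed index
+PutIndexResult[] results = store.Maintenance.Send(new PutIndexesOperation(byEmployee, byCompany));
+`}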
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-java.mdx
new file mode 100644
index 0000000000..39ad127c02
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-java.mdx
@@ -0,0 +1,59 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**PutIndexesOperation** is used to insert indexes into a database.
+
+### Syntax
+
+
+
+{`PutIndexesOperation(IndexDefinition... indexToAdd)
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **indexToAdd** | `IndexDefinition...` | Definitions of indexes to deploy |
+
+| Return Value | Description |
+| ------------- | ----- |
+| `PutIndexResult[]` | List of created indexes |
+
+### Example I
+
+
+
+{`IndexDefinition indexDefinition = new IndexDefinition();
+indexDefinition.setMaps(Collections.singleton("from order in docs.Orders select new \{ " +
+ " order.Employee," +
+ " order.Company," +
+ " Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))" +
+ "\}"));
+
+store.maintenance().send(new PutIndexesOperation(indexDefinition));
+`}
+
+
+
+### Example II
+
+
+
+{`IndexDefinitionBuilder builder = new IndexDefinitionBuilder();
+builder.setMap("from order in docs.Orders select new \{ " +
+ " order.Employee," +
+ " order.Company," +
+ " Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))" +
+ "\}");
+
+IndexDefinition definition = builder.toIndexDefinition(store.getConventions());
+store.maintenance()
+ .send(new PutIndexesOperation(definition));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-nodejs.mdx
new file mode 100644
index 0000000000..bf8450cb65
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-nodejs.mdx
@@ -0,0 +1,188 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* There are a few ways to create and deploy indexes in a database.
+
+* This page describes deploying a **static-index** using the `PutIndexesOperation` Operation.
+ For a general description of Operations see [what are operations](../../../../client-api/operations/what-are-operations.mdx).
+
+* In this page:
+ * [Ways to deploy indexes - short summary](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#ways-to-deploy-indexes---short-summary)
+ * [Put indexes operation with IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#put-indexes-operation-with-indexdefinition)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#syntax)
+
+
+## Ways to deploy indexes - short summary
+
+
+
+##### Static-indexes:
+
+There are a few ways to deploy a static-index from the Client API:
+
+* The following methods are explained in section [Deploy a static-index](../../../../indexes/creating-and-deploying.mdx#deploy-a-static-index):
+ * Call `execute()` on a specific index instance.
+ * Call `executeIndex()` or `executeIndexes()` on your _DocumentStore_ object.
+ * Call `IndexCreation.createIndexes()`.
+
+* Alternatively, you can execute the `PutIndexesOperation` maintenance operation on the _DocumentStore_, **as explained below**.
+
+
+
+
+##### Auto-indexes:
+
+* An auto-index is created by the server when making a filtering query that doesn't specify which index to use.
+ Learn more in [Creating auto indexes](../../../../indexes/creating-and-deploying.mdx#auto-indexes).
+
+
+
+
+## Put indexes operation with IndexDefinition
+
+Using `PutIndexesOperation` with **IndexDefinition** allows you to:
+
+ * Choose any name for the index.
+ This string-based name is specified when querying the index.
+ * Set low-level properties available in _IndexDefinition_.
+
+
+
+
+{`// Create an index definition
+const indexDefinition = new IndexDefinition();
+
+// Name is mandatory, can use any string
+indexDefinition.name = "OrdersByTotal";
+
+// Define the index map functions, string format
+// A single string for a map-index, multiple strings for a multi-map-index
+indexDefinition.maps = new Set([\`
+ // Define the collection that will be indexed:
+ from order in docs.Orders
+
+ // Define the index-entry:
+ select new
+ {
+ // Define the index-fields within each index-entry:
+ Employee = order.Employee,
+ Company = order.Company,
+ Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))
+ }\`
+]);
+
+// indexDefinition.reduce = ...
+
+// Can provide other index definitions available on the IndexDefinition class
+// Override the default values, e.g.:
+indexDefinition.deploymentMode = "Rolling";
+indexDefinition.priority = "High";
+indexDefinition.configuration = {
+ "Indexing.IndexMissingFieldsAsNull": "true"
+};
+// See all available properties in syntax below
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+const putIndexesOp = new PutIndexesOperation(indexDefinition);
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(putIndexesOp);
+`}
+
+
+
+
+{`// Create an index definition
+const indexDefinition = new IndexDefinition();
+
+// Name is mandatory, can use any string
+indexDefinition.name = "OrdersByTotal";
+
+// Define the index map functions, string format
+// A single string for a map-index, multiple strings for a multi-map-index
+indexDefinition.maps = new Set([\`
+ map('Orders', function(order) {
+ return {
+ Employee: order.Employee,
+ Company: order.Company,
+ Total: order.Lines.reduce(function(sum, l) {
+ return sum + (l.Quantity * l.PricePerUnit) * (1 - l.Discount);
+ }, 0)
+ };
+ });\`
+]);
+
+// indexDefinition.reduce = ...
+
+// Can provide other index definitions available on the IndexDefinition class
+// Override the default values, e.g.:
+indexDefinition.deploymentMode = "Rolling";
+indexDefinition.priority = "High";
+indexDefinition.configuration = {
+ "Indexing.IndexMissingFieldsAsNull": "true"
+};
+// See all available properties in syntax below
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+const putIndexesOp = new PutIndexesOperation(indexDefinition);
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(putIndexesOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`const putIndexesOperation = new PutIndexesOperation(indexesToAdd);
+`}
+
+
+
+| Parameter | Type | Description |
+|------------------|------------------------|----------------------------------|
+| **indexesToAdd** | `...IndexDefinition[]` | Definitions of indexes to deploy |
+
+
+
+| `IndexDefinition` parameter | Type | Description |
+|----------------------------------------------|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|
+| name | `string` | Name of the index, a unique identifier |
+| maps                                         | `Set<string>`                        | All the map functions for the index |
+| reduce | `string` | The index reduce function |
+| deploymentMode | `object` | Deployment mode (Parallel, Rolling) |
+| state | `object` | State of index (Normal, Disabled, Idle, Error) |
+| priority | `object` | Priority of index (Low, Normal, High) |
+| lockMode | `object` | Lock mode of index (Unlock, LockedIgnore, LockedError) |
+| fields                                       | `Record<string, IndexFieldOptions>` | _IndexFieldOptions_ per index field |
+| additionalSources                            | `Record<string, string>`            | Additional code files to be compiled with this index |
+| additionalAssemblies | `object[]` | Additional assemblies that are referenced |
+| configuration | `object` | Can override [indexing configuration](../../../../server/configuration/indexing-configuration.mdx) by setting this Record<string, string> |
+| outputReduceToCollection | `string` | A collection name for saving the reduce results as documents |
+| reduceOutputIndex | `number` | This number will be part of the reduce results documents IDs |
+| patternForOutputReduceToCollectionReferences | `string` | Pattern for documents IDs which reference IDs of reduce results documents |
+| patternReferencesCollectionName | `string` | A collection name for the reference documents created based on provided pattern |
+
+| `store.maintenance.send(putIndexesOp)` return value | Description |
+|------------------------------------------------------|----------------------------|
+| `object[]` | operation result per index |
+
+| Operation result per index | Type | Description |
+|-----------------------------|----------|-----------------------------------------|
+| index | `string` | Name of the index that was added |
+| raftCommandIndex            | `number` | Index of raft command that was executed |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-php.mdx
new file mode 100644
index 0000000000..164976e0f2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-php.mdx
@@ -0,0 +1,226 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* There are a few ways to create and deploy indexes in a database.
+
+* This page describes deploying a **static-index** using the `PutIndexesOperation` Operation.
+ For a general description of Operations see [what are operations](../../../../client-api/operations/what-are-operations.mdx).
+
+* In this page:
+ * [Ways to deploy indexes - short summary](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#ways-to-deploy-indexes---short-summary)
+ * [Put indexes operation with IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#put-indexes-operation-with-indexdefinition)
+ * [Put indexes operation with IndexDefinitionBuilder](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#put-indexes-operation-with-indexdefinitionbuilder)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#syntax)
+
+
+## Ways to deploy indexes - short summary
+
+#### Static index:
+
+There are a few ways to deploy a static-index from the Client API:
+
+ * Call `execute()` on a specific index instance
+ * Call `IndexCreation.create_indexes()` to deploy multiple indexes
+ * Execute `PutIndexesOperation` maintenance operation on the Document Store - see below
+ * Learn more in [static indexes](../../../../indexes/creating-and-deploying.mdx#static-indexes)
+
+#### Auto index:
+
+ * An auto-index is created by the server when making a filtering query that doesn't specify which index to use
+ * Learn more in [auto indexes](../../../../indexes/creating-and-deploying.mdx#auto-indexes)
+
+
+
+## Put indexes operation with IndexDefinition
+
+Using `PutIndexesOperation` with **IndexDefinition** allows the following:
+
+ * Choosing any name for the index.
+ * Setting low-level properties available in _IndexDefinition_.
+
+
+
+
+{`// Create an index definition
+$indexDefinition = new IndexDefinition();
+
+// Name is mandatory, can use any string
+$indexDefinition->setName("OrdersByTotal");
+
+// Define the index Map functions, string format
+// A single string for a map-index, multiple strings for a multi-map-index
+$indexDefinition->setMaps([
+    "// Define the collection that will be indexed:\n" .
+    "from order in docs.Orders\n" .
+    "// Define the index-entry:\n" .
+    "select new\n" .
+    "{\n" .
+    "    // Define the index-fields within each index-entry:\n" .
+    "    Employee = order.Employee,\n" .
+    "    Company = order.Company,\n" .
+    "    Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))\n" .
+    "}"
+]);
+
+// $indexDefinition->setReduce(...);
+
+// Can provide other index definitions available on the IndexDefinition class
+// Override the default values, e.g.:
+$indexDefinition->setDeploymentMode(IndexDeploymentMode::rolling());
+$indexDefinition->setPriority(IndexPriority::high());
+
+$configuration = new IndexConfiguration();
+$configuration->offsetSet("Indexing.IndexMissingFieldsAsNull", "true");
+$indexDefinition->setConfiguration($configuration);
+
+// See all available properties in syntax below
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+$putIndexesOp = new PutIndexesOperation($indexDefinition);
+
+// Execute the operation by passing it to maintenance()->send
+$store->maintenance()->send($putIndexesOp);
+`}
+
+
+
+
+{`// Create an index definition
+$indexDefinition = new IndexDefinition();
+
+// Name is mandatory, can use any string
+$indexDefinition->setName("OrdersByTotal");
+
+// Define the index Map functions, string format
+// A single string for a map-index, multiple strings for a multi-map-index
+$indexDefinition->setMaps([
+ "map('Orders', function(order) {" .
+ " return {" .
+ " Employee: order.Employee," .
+ " Company: order.Company," .
+ " Total: order.Lines.reduce(function(sum, l) {" .
+ " return sum + (l.Quantity * l.PricePerUnit) * (1 - l.Discount);" .
+ " }, 0)" .
+ " };" .
+ "});"
+]);
+
+// $indexDefinition->setReduce(...);
+
+// Can provide other index definitions available on the IndexDefinition class
+// Override the default values, e.g.:
+
+$indexDefinition->setDeploymentMode(IndexDeploymentMode::rolling());
+$indexDefinition->setPriority(IndexPriority::high());
+
+$configuration = new IndexConfiguration();
+$configuration->offsetSet("Indexing.IndexMissingFieldsAsNull", "true");
+$indexDefinition->setConfiguration($configuration);
+// See all available properties in syntax below
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+$putIndexesOp = new PutIndexesOperation($indexDefinition);
+
+// Execute the operation by passing it to maintenance()->send
+$store->maintenance()->send($putIndexesOp);
+`}
+
+
+
+
+
+
+## Put indexes operation with IndexDefinitionBuilder
+
+* Using `PutIndexesOperation` with an IndexDefinition created from an **IndexDefinitionBuilder**
+ allows setting low-level properties available in _IndexDefinitionBuilder_.
+
+* Note that only map or map-reduce indexes can be generated by the _IndexDefinitionBuilder_.
+ To generate multi-map indexes, use the above _IndexDefinition_ option.
+
+
+
+{`// Create an index definition builder
+$builder = new IndexDefinitionBuilder();
+$builder->setMap(
+    "// Define the collection that will be indexed:\n" .
+    "from order in docs.Orders\n" .
+    "// Define the index-entry:\n" .
+    "select new\n" .
+    "\{\n" .
+    "    // Define the index-fields within each index-entry:\n" .
+    "    Employee = order.Employee,\n" .
+    "    Company = order.Company,\n" .
+    "    Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))\n" .
+    "\}"
+);
+
+// Can provide other properties available on the IndexDefinitionBuilder class, e.g.:
+$builder->setDeploymentMode(IndexDeploymentMode::rolling());
+$builder->setPriority(IndexPriority::high());
+// $builder->setReduce(...);
+
+// Generate index definition from builder
+// Pass the conventions, needed for building the Maps property
+$indexDefinition = $builder->toIndexDefinition($store->getConventions());
+
+// Optionally, set the index name, can use any string
+// If not provided then default name from builder is used, e.g.: "IndexDefinitionBuildersOfOrders"
+$indexDefinition->setName("OrdersByTotal");
+
+// Define the put indexes operation, pass the index definition
+// Note: multiple index definitions can be passed, see syntax below
+$putIndexesOp = new PutIndexesOperation($indexDefinition);
+
+// Execute the operation by passing it to maintenance.send
+$store->maintenance()->send($putIndexesOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`PutIndexesOperation(IndexDefinition|IndexDefinitionArray|array ...$indexToAdd)
+`}
+
+
+
+| Parameters | Type | Description |
+| - |- | - |
+| **$indexToAdd** | `IndexDefinition` `IndexDefinitionArray` `array`| Definitions of indexes to deploy |
+
+
+
+| `IndexDefinition` parameter| Type | Description |
+| - |- | - |
+| **$name** | `?string` | Name of the index, a unique identifier |
+| **$state** | `?IndexState` | State of index (NORMAL, DISABLED, IDLE, ERROR) |
+| **$priority** | `?IndexPriority` | Priority of index (LOW, NORMAL, HIGH) |
+| **$maps** | `?StringSet` | All the map functions for the index |
+| **$reduce** | `?string` | The index reduce function |
+| **$deploymentMode** | `?IndexDeploymentMode` | Deployment mode (`parallel`, `rolling`) |
+| **$lockMode** | `?IndexLockMode` | Lock mode of index (`Unlock`, `LockedIgnore`, `LockedError`) |
+| **$fields** | `?IndexFieldOptionsArray` | _IndexFieldOptions_ per index field |
+| **$additionalSources** | `?AdditionalSourcesArray` | Additional code files to be compiled with this index |
+| **$additionalAssemblies** | `?AdditionalAssemblySet` | Additional assemblies that are referenced |
+| **$configuration** | `?IndexConfiguration` | Can override [indexing configuration](../../../../server/configuration/indexing-configuration.mdx) by setting this dictionary |
+| **$outputReduceToCollection** | `?string` | A collection name for saving the reduce results as documents |
+| **$reduceOutputIndex** | `?int` | This number will be part of the reduce results documents IDs |
+| **$patternForOutputReduceToCollectionReferences** | `?string` | Pattern for documents IDs which reference IDs of reduce results documents |
+| **$patternReferencesCollectionName** | `?string` | A collection name for the reference documents created based on provided pattern |
+| **$sourceType** | `?IndexSourceType` | Index source type (`None`, `Documents`, `TimeSeries`, `Counters`) |
+| **$type** | `?IndexType` | Index type (`None`, `AutoMap`, `AutoMapReduce`, `Map`, `MapReduce`, `Faulty`, `JavaScriptMap`, `JavaScriptMapReduce`) |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-python.mdx
new file mode 100644
index 0000000000..71163415df
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_put-indexes-python.mdx
@@ -0,0 +1,224 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* There are a few ways to create and deploy indexes in a database.
+
+* This page describes deploying a **static-index** using the `PutIndexesOperation` Operation.
+ For a general description of Operations see [what are operations](../../../../client-api/operations/what-are-operations.mdx).
+
+* In this page:
+ * [Ways to deploy indexes - short summary](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#ways-to-deploy-indexes---short-summary)
+ * [Put indexes operation with IndexDefinition](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#put-indexes-operation-with-indexdefinition)
+ * [Put indexes operation with IndexDefinitionBuilder](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#put-indexes-operation-with-indexdefinitionbuilder)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx#syntax)
+
+
+## Ways to deploy indexes - short summary
+
+#### Static index:
+
+There are a few ways to deploy a static-index from the Client API:
+
+ * Call `execute()` on a specific index instance
+ * Call `IndexCreation.create_indexes()` to deploy multiple indexes
+ * Execute `PutIndexesOperation` maintenance operation on the Document Store - see below
+ * Learn more in [static indexes](../../../../indexes/creating-and-deploying.mdx#static-indexes)
+
+#### Auto index:
+
+ * An auto-index is created by the server when making a filtering query that doesn't specify which index to use
+ * Learn more in [auto indexes](../../../../indexes/creating-and-deploying.mdx#auto-indexes)
+
+
+
+## Put indexes operation with IndexDefinition
+
+Using `PutIndexesOperation` with **IndexDefinition** allows the following:
+
+ * Choosing any name for the index.
+ * Setting low-level properties available in _IndexDefinition_.
+
+
+
+
+{`# Create an index definition
+index_definition = IndexDefinition(
+ # Name is mandatory, can use any string
+ name="OrdersByTotal",
+ # Define the index Map functions, string format
+ # A single string for a map-index, multiple strings for a multi-map-index
+ maps={
+ """
+ // Define the collection that will be indexed:
+ from order in docs.Orders
+
+ // Define the index-entry:
+ select new
+ {
+ // Define the index-fields within each index-entry:
+ Employee = order.Employee,
+ Company = order.Company,
+ Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))
+ }
+ """
+ },
+ # reduce = ...
+ # Can provide other index definitions available on the IndexDefinition class
+ # Override the default values, e.g.:
+ deployment_mode=IndexDeploymentMode.ROLLING,
+ priority=IndexPriority.HIGH,
+ configuration={"Indexing.IndexMissingFieldsAsNull": "true"},
+ # See all available properties in syntax below
+)
+
+# Define the put indexes operation, pass the index definition
+# Note: multiple index definitions can be passed, see syntax below
+put_indexes_op = PutIndexesOperation(index_definition)
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(put_indexes_op)
+`}
+
+
+
+
+{`# Create an index definition
+index_definition = IndexDefinition(
+ # Name is mandatory, can use any string
+ name="OrdersByTotal",
+ # Define the index map functions, string format
+    # A single string for a map-index, multiple strings for a multi-map-index
+ maps={
+ """
+ map('Orders', function(order) {
+ return {
+ Employee: order.Employee,
+ Company: order.Company,
+ Total: order.Lines.reduce(function(sum, l) {
+ return sum + (l.Quantity * l.PricePerUnit) * (1 - l.Discount);
+ }, 0)
+ };
+ });
+ """
+ },
+ # reduce = ...,
+ # Can provide other index definitions available on the IndexDefinition class
+ # Override the default values, e.g.:
+ deployment_mode=IndexDeploymentMode.ROLLING,
+ priority=IndexPriority.HIGH,
+ configuration={"Indexing.IndexMissingFieldsAsNull": "true"},
+ # See all available properties in syntax below
+)
+# Define the put indexes operation, pass the index definition
+# Note: multiple index definitions can be passed, see syntax below
+put_indexes_op = PutIndexesOperation(index_definition)
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(put_indexes_op)
+`}
+
+
+
+
+
+
+## Put indexes operation with IndexDefinitionBuilder
+
+* Using `PutIndexesOperation` with an IndexDefinition created from an **IndexDefinitionBuilder**
+ allows setting low-level properties available in _IndexDefinitionBuilder_.
+
+* Note that only map or map-reduce indexes can be generated by the _IndexDefinitionBuilder_.
+ To generate multi-map indexes, use the above _IndexDefinition_ option.
+
+
+
+{`# Create an index definition builder
+builder = IndexDefinitionBuilder()
+builder.map = """
+ // Define the collection that will be indexed:
+ from order in docs.Orders
+
+ // Define the index-entry:
+ select new
+ \{
+ // Define the index-fields within each index-entry:
+ Employee = order.Employee,
+ Company = order.Company,
+ Total = order.Lines.Sum(l => (l.Quantity * l.PricePerUnit) * (1 - l.Discount))
+ \}
+ """
+# Can provide other properties available on the IndexDefinitionBuilder class, e.g.:
+builder.deployment_mode = IndexDeploymentMode.ROLLING
+builder.priority = IndexPriority.HIGH
+# builder.reduce = ..., etc.
+
+# Generate index definition from builder
+# Pass the conventions, needed for building the maps property
+index_definition = builder.to_index_definition(store.conventions)
+
+# Optionally, set the index name, can use any string
+# If not provided then default name from builder is used, e.g.: "IndexDefinitionBuildersOfOrders"
+index_definition.name = "OrdersByTotal"
+
+# Define the put indexes operation, pass the index definition
+# Note: multiple index definitions can be passed, see syntax below
+put_indexes_op = PutIndexesOperation(index_definition)
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(put_indexes_op)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class PutIndexesOperation(MaintenanceOperation):
+ def __init__(self, *indexes_to_add: IndexDefinition): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - |- | - |
+| **\*indexes_to_add** | `IndexDefinition` | Definitions of indexes to deploy |
+
+
+
+| `IndexDefinition` parameter| Type | Description |
+| - |- | - |
+| **name** | `str` | Name of the index, a unique identifier |
+| **maps** | `Set[str]` | All the map functions for the index |
+| **reduce** | `str` | The index reduce function |
+| **deployment_mode** | `IndexDeploymentMode` | Deployment mode (PARALLEL, ROLLING) |
+| **state** | `IndexState` | State of index (NORMAL, DISABLED, IDLE, ERROR) |
+| **priority** | `IndexPriority` | Priority of index (LOW, NORMAL, HIGH) |
+| **lock_mode** | `IndexLockMode` | Lock mode of index (UNLOCK, LOCKED_IGNORE, LOCKED_ERROR) |
+| **fields** | `Dict[str, IndexFieldOptions]` | _IndexFieldOptions_ per index field |
+| **additional_sources** | `Dict[str, str]` | Additional code files to be compiled with this index |
+| **additional_assemblies** | `Set[AdditionalAssembly]` | Additional assemblies that are referenced |
+| **configuration** | `IndexConfiguration` | Can override [indexing configuration](../../../../server/configuration/indexing-configuration.mdx) by setting this dictionary |
+| **output_reduce_to_collection** | `str` | A collection name for saving the reduce results as documents |
+| **reduce_output_index** | `int` | This number will be part of the reduce results documents IDs |
+| **pattern_for_output_reduce_to_collection_references** | `str` | Pattern for documents IDs which reference IDs of reduce results documents |
+| **pattern_references_collection_name** | `str` | A collection name for the reference documents created based on provided pattern |
+
+| `store.maintenance.send(put_indexes_op)` return value | Description |
+| - | - |
+| `List[PutIndexResult]` | List of _PutIndexResult_ per index |
+
+| `PutIndexResult` parameter | Type | Description |
+| - | - | - |
+| **index** | `str` | Name of the index that was added |
+| **raft_command_index** | `int` | Index of raft command that was executed |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-csharp.mdx
new file mode 100644
index 0000000000..b4faca553c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-csharp.mdx
@@ -0,0 +1,74 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `ResetIndexOperation` to rebuild an index:
+ * All existing indexed data will be removed.
+ * All items matched by the index definition will be re-indexed.
+
+* **Indexes scope**:
+ * Both static and auto indexes can be reset.
+
+* **Nodes scope**:
+ * When resetting an index from the **client**:
+    The index is reset on the preferred node only, and Not on all the database-group nodes.
+ * When resetting an index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index is reset on the local node the browser is opened on, even if it is Not the preferred node.
+
+* If the index is [disabled](../../../../client-api/operations/maintenance/indexes/disable-index.mdx)
+ or [paused](../../../../client-api/operations/maintenance/indexes/stop-index.mdx), resetting the index
+ will put it back to the **normal** running state on the local node where the action was performed.
+
+* In this page:
+  * [Reset index](../../../../client-api/operations/maintenance/indexes/reset-index.mdx#reset-index)
+  * [Syntax](../../../../client-api/operations/maintenance/indexes/reset-index.mdx#syntax)
+
+
+## Reset index
+
+
+
+
+{`// Define the reset index operation, pass index name
+var resetIndexOp = new ResetIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+// An exception will be thrown if index does not exist
+store.Maintenance.Send(resetIndexOp);
+`}
+
+
+
+
+{`// Define the reset index operation, pass index name
+var resetIndexOp = new ResetIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// An exception will be thrown if index does not exist
+await store.Maintenance.SendAsync(resetIndexOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public ResetIndexOperation(string indexName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Name of an index to reset |
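+
+An exception is thrown when the named index does not exist. If the reset should simply be skipped
+in that case, the exception can be caught; a minimal sketch (assuming the
+`IndexDoesNotExistException` type from `Raven.Client.Exceptions.Documents.Indexes`):
+
+{`try
+\{
+    store.Maintenance.Send(new ResetIndexOperation("Orders/Totals"));
+\}
+catch (IndexDoesNotExistException)
+\{
+    // The index was never deployed (or was already deleted) - nothing to reset
+\}
+`}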
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-java.mdx
new file mode 100644
index 0000000000..f9d3604916
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-java.mdx
@@ -0,0 +1,32 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**ResetIndexOperation** removes all indexing data for a given index from the server, so that indexing can start from scratch for that index.
+
+## Syntax
+
+
+
+{`public ResetIndexOperation(String indexName)
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **indexName** | `String` | Name of an index to reset |
+
+
+## Example
+
+
+
+{`store.maintenance()
+ .send(new ResetIndexOperation("Orders/Totals"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-nodejs.mdx
new file mode 100644
index 0000000000..b0a26d5002
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-nodejs.mdx
@@ -0,0 +1,61 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `ResetIndexOperation` to rebuild an index:
+ * All existing indexed data will be removed.
+ * All items matched by the index definition will be re-indexed.
+
+* **Indexes scope**:
+ * Both static and auto indexes can be reset.
+
+* **Nodes scope**:
+ * When resetting an index from the **client**:
+    The index is reset on the preferred node only, and Not on all the database-group nodes.
+ * When resetting an index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index is reset on the local node the browser is opened on, even if it is Not the preferred node.
+
+* If the index is [disabled](../../../../client-api/operations/maintenance/indexes/disable-index.mdx)
+ or [paused](../../../../client-api/operations/maintenance/indexes/stop-index.mdx), resetting the index
+ will put it back to the **normal** running state on the local node where the action was performed.
+
+* In this page:
+  * [Reset index](../../../../client-api/operations/maintenance/indexes/reset-index.mdx#reset-index)
+  * [Syntax](../../../../client-api/operations/maintenance/indexes/reset-index.mdx#syntax)
+
+
+## Reset index
+
+
+
+{`// Define the reset index operation, pass index name
+const resetIndexOp = new ResetIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance.send
+// An exception will be thrown if index does not exist
+await store.maintenance.send(resetIndexOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const resetIndexOp = new ResetIndexOperation(indexName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Name of an index to reset |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-php.mdx
new file mode 100644
index 0000000000..6ebf10ead6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-php.mdx
@@ -0,0 +1,61 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `ResetIndexOperation` to rebuild an index:
+ * All existing indexed data will be removed.
+ * All items matched by the index definition will be re-indexed.
+
+* **Indexes scope**:
+ * Both static and auto indexes can be reset.
+
+* **Nodes scope**:
+ * When resetting an index from the **client**:
+    The index is reset on the preferred node only, and Not on all the database-group nodes.
+ * When resetting an index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index is reset on the local node the browser is opened on, even if it is Not the preferred node.
+
+* If the index is [disabled](../../../../client-api/operations/maintenance/indexes/disable-index.mdx)
+ or [paused](../../../../client-api/operations/maintenance/indexes/stop-index.mdx), resetting the index
+ will put it back to the **normal** running state on the local node where the action was performed.
+
+* In this page:
+ * [Reset index](../../../../client-api/operations/maintenance/indexes/reset-index.mdx#reset-index)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/reset-index.mdx#syntax)
+
+
+## Reset index
+
+
+
+{`// Define the reset index operation, pass index name
+$resetIndexOp = new ResetIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance()->send
+// An exception will be thrown if index does not exist
+$store->maintenance()->send($resetIndexOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`public ResetIndexOperation(?string $indexName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$indexName** | `?string` | Name of an index to reset |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-python.mdx
new file mode 100644
index 0000000000..1d3a205e79
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_reset-index-python.mdx
@@ -0,0 +1,62 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `ResetIndexOperation` to rebuild an index:
+ * All existing indexed data will be removed.
+ * All items matched by the index definition will be re-indexed.
+
+* **Indexes scope**:
+ * Both static and auto indexes can be reset.
+
+* **Nodes scope**:
+ * When resetting an index from the **client**:
+    The index is reset on the preferred node only, and Not on all the database-group nodes.
+ * When resetting an index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index is reset on the local node the browser is opened on, even if it is Not the preferred node.
+
+* If the index is [disabled](../../../../client-api/operations/maintenance/indexes/disable-index.mdx)
+ or [paused](../../../../client-api/operations/maintenance/indexes/stop-index.mdx), resetting the index
+ will put it back to the **normal** running state on the local node where the action was performed.
+
+* In this page:
+  * [Reset index](../../../../client-api/operations/maintenance/indexes/reset-index.mdx#reset-index)
+  * [Syntax](../../../../client-api/operations/maintenance/indexes/reset-index.mdx#syntax)
+
+
+## Reset index
+
+
+
+{`# Define the reset index operation, pass index name
+reset_index_op = ResetIndexOperation("Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+# An exception will be thrown if index does not exist
+store.maintenance.send(reset_index_op)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class ResetIndexOperation(VoidMaintenanceOperation):
+ def __init__(self, index_name: str): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **index_name** | `str` | Name of an index to reset |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-csharp.mdx
new file mode 100644
index 0000000000..a0ad8d7fda
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-csharp.mdx
@@ -0,0 +1,201 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The lock mode controls the behavior of index modifications.
+ Use `SetIndexesLockOperation` to modify the **lock mode** for a single index or multiple indexes.
+
+* **Indexes scope**:
+ The lock mode can be set only for static-indexes, not for auto-indexes.
+
+* **Nodes scope**:
+ The lock mode will be updated on all nodes in the database group.
+
+* Setting the lock mode can also be done in the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view.
+  Locking an index is not a security measure; the index can be unlocked at any time.
+
+* In this page:
+ * [Lock modes](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#lock-modes)
+ * [Sample usage flow](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#sample-usage-flow)
+ * [Set lock mode - single index](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#set-lock-mode---single-index)
+ * [Set lock mode - multiple indexes](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#set-lock-mode---multiple-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#syntax)
+
+
+## Lock modes
+
+* **Unlocked** - when lock mode is set to `Unlock`:
+ * Any change to the index definition will be applied.
+ * If the new index definition differs from the one stored on the server,
+ the index will be updated and the data will be re-indexed using the new index definition.
+ * The index can be deleted.
+
+* **Locked (ignore)** - when lock mode is set to `LockedIgnore`:
+ * Index definition changes will Not be applied.
+ * Modifying the index definition will return successfully and no error will be raised,
+ however, no change will be made to the index definition on the server.
+ * Trying to delete the index will not remove it from the server, and no error will be raised.
+
+* **Locked (error)** - when lock mode is set to `LockedError`:
+  * Index definition changes will Not be applied.
+  * An exception will be thrown upon trying to modify the index.
+ * The index cannot be deleted. Attempting to do so will result in an exception.
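+
+To verify which mode is currently in effect, the index definition can be fetched back from the server;
+a minimal sketch using `GetIndexOperation`, which returns the server-side `IndexDefinition`:
+
+{`// Fetch the index definition from the server and inspect its lock mode
+IndexDefinition definition = store.Maintenance.Send(new GetIndexOperation("Orders/Totals"));
+IndexLockMode? currentLockMode = definition.LockMode;
+`}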
+
+
+
+## Sample usage flow
+
+Consider the following scenario:
+
+* Your client application defines and [deploys a static-index](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx) upon application startup.
+
+* After the application has started, you make a change to your index definition and re-indexing occurs.
+  However, if the index lock mode is _'Unlock'_, the next time your application starts,
+  it will reset the index definition back to the original version.
+
+* Locking the index allows you to make changes to the running index and prevents the application
+ from setting it back to the previous definition upon startup. See the following steps:
+
+
+ 1. Run your application
+ 2. Modify the index definition on the server (from Studio, or from another application),
+ and then set this index lock mode to `LockedIgnore`.
+ 3. A side-by-side replacement index is created on the server.
+ It will index your dataset according to the **new** definition.
+ 4. At this point, if any instance of your original application is started,
+ the code that defines and deploys the index upon startup will have no effect
+ since the index is 'locked'.
+ 5. Once the replacement index is done indexing, it will replace the original index.
+
+
+
+## Set lock mode - single index
+
+
+
+
+{`// Define the set lock mode operation
+// Pass index name & lock mode
+var setLockModeOp = new SetIndexesLockOperation("Orders/Totals", IndexLockMode.LockedIgnore);
+
+// Execute the operation by passing it to Maintenance.Send
+// An exception will be thrown if index does not exist
+store.Maintenance.Send(setLockModeOp);
+
+// Lock mode is now set to 'LockedIgnore'
+// Any modifications done now to the index will Not be applied, and will Not throw
+`}
+
+
+
+
+{`// Define the set lock mode operation
+// Pass index name & lock mode
+var setLockModeOp = new SetIndexesLockOperation("Orders/Totals", IndexLockMode.LockedIgnore);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// An exception will be thrown if index does not exist
+await store.Maintenance.SendAsync(setLockModeOp);
+
+// Lock mode is now set to 'LockedIgnore'
+// Any modifications done now to the index will Not be applied, and will Not throw
+`}
+
+
+
+
+
+
+## Set lock mode - multiple indexes
+
+
+
+
+{`// Define the index list and the new lock mode:
+var parameters = new SetIndexesLockOperation.Parameters {
+ IndexNames = new[] {"Orders/Totals", "Orders/ByCompany"},
+ Mode = IndexLockMode.LockedError
+};
+
+// Define the set lock mode operation, pass the parameters
+var setLockModeOp = new SetIndexesLockOperation(parameters);
+
+// Execute the operation by passing it to Maintenance.Send
+// An exception will be thrown if any of the specified indexes do not exist
+store.Maintenance.Send(setLockModeOp);
+
+// Lock mode is now set to 'LockedError' on both indexes
+// Any modifications done now to either index will throw
+`}
+
+
+
+
+{`// Define the index list and the new lock mode:
+var parameters = new SetIndexesLockOperation.Parameters {
+ IndexNames = new[] {"Orders/Totals", "Orders/ByCompany"},
+ Mode = IndexLockMode.LockedError
+};
+
+// Define the set lock mode operation, pass the parameters
+var setLockModeOp = new SetIndexesLockOperation(parameters);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// An exception will be thrown if any of the specified indexes do not exist
+await store.Maintenance.SendAsync(setLockModeOp);
+
+// Lock mode is now set to 'LockedError' on both indexes
+// Any modifications done now to either index will throw
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+public SetIndexesLockOperation(string indexName, IndexLockMode mode);
+public SetIndexesLockOperation(Parameters parameters);
+`}
+
+
+
+| Parameters | Type | Description |
+|- | - | - |
+| **indexName** | string | Index name for which to set lock mode |
+| **mode** | `IndexLockMode` | Lock mode to set |
+| **parameters** | `SetIndexesLockOperation.Parameters` | List of indexes + Lock mode to set. An exception is thrown if any of the specified indexes do not exist. |
+
+
+
+{`public enum IndexLockMode
+\{
+ Unlock,
+ LockedIgnore,
+ LockedError
+\}
+`}
+
+
+
+
+
+{`public class Parameters
+\{
+ public string[] IndexNames \{ get; set; \}
+ public IndexLockMode Mode \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-java.mdx
new file mode 100644
index 0000000000..467df95a03
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-java.mdx
@@ -0,0 +1,83 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**SetIndexesLockOperation** allows you to change the lock mode of a given index or indexes.
+
+## Syntax
+
+
+
+{`public SetIndexesLockOperation(String indexName, IndexLockMode mode)
+public SetIndexesLockOperation(SetIndexesLockOperation.Parameters parameters)
+`}
+
+
+
+
+
+{`public enum IndexLockMode \{
+ UNLOCK,
+ LOCKED_IGNORE,
+ LOCKED_ERROR
+\}
+`}
+
+
+
+
+
+{`public static class Parameters \{
+ private String[] indexNames;
+ private IndexLockMode mode;
+
+ public String[] getIndexNames() \{
+ return indexNames;
+ \}
+
+ public void setIndexNames(String[] indexNames) \{
+ this.indexNames = indexNames;
+ \}
+
+ public IndexLockMode getMode() \{
+ return mode;
+ \}
+
+ public void setMode(IndexLockMode mode) \{
+ this.mode = mode;
+ \}
+\}
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **indexName** | `String` | Name of the index for which to change the lock mode |
+| **mode** | `IndexLockMode` | New index lock mode |
+| **parameters** | `SetIndexesLockOperation.Parameters` | List of indexes + new index lock mode |
+
+## Example I
+
+
+
+{`store.maintenance().send(new SetIndexesLockOperation("Orders/Totals", IndexLockMode.LOCKED_IGNORE));
+`}
+
+
+
+## Example II
+
+
+
+{`SetIndexesLockOperation.Parameters parameters = new SetIndexesLockOperation.Parameters();
+parameters.setIndexNames(new String[]\{ "Orders/Totals", "Orders/ByCompany" \});
+parameters.setMode(IndexLockMode.LOCKED_IGNORE);
+
+store.maintenance().send(new SetIndexesLockOperation(parameters));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-nodejs.mdx
new file mode 100644
index 0000000000..5c1684f5ab
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-nodejs.mdx
@@ -0,0 +1,151 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The lock mode controls the behavior of index modifications.
+ Use `SetIndexesLockOperation` to modify the **lock mode** for a single index or multiple indexes.
+
+* **Indexes scope**:
+ The lock mode can be set only for static-indexes, not for auto-indexes.
+
+* **Nodes scope**:
+ The lock mode will be updated on all nodes in the database group.
+
+* Setting the lock mode can also be done in the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view.
+  Locking an index is not a security measure; the index can be unlocked at any time.
+
+
+* In this page:
+ * [Lock modes](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#lock-modes)
+ * [Sample usage flow](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#sample-usage-flow)
+ * [Set lock mode - single index](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#set-lock-mode---single-index)
+ * [Set lock mode - multiple indexes](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#set-lock-mode---multiple-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#syntax)
+
+
+## Lock modes
+
+* **Unlocked** - when lock mode is set to `Unlock`:
+ * Any change to the index definition will be applied.
+ * If the new index definition differs from the one stored on the server,
+ the index will be updated and the data will be re-indexed using the new index definition.
+ * The index can be deleted.
+
+* **Locked (ignore)** - when lock mode is set to `LockedIgnore`:
+ * Index definition changes will Not be applied.
+ * Modifying the index definition will return successfully and no error will be raised,
+ however, no change will be made to the index definition on the server.
+ * Trying to delete the index will not remove it from the server, and no error will be raised.
+
+* **Locked (error)** - when lock mode is set to `LockedError`:
+  * Index definition changes will Not be applied.
+ * An exception will be thrown upon trying to modify the index.
+ * The index cannot be deleted. Attempting to do so will result in an exception.
+
+
+
+## Sample usage flow
+
+Consider the following scenario:
+
+* Your client application defines and [deploys a static-index](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx) upon application startup.
+
+* After the application has started, you make a change to your index definition and re-indexing occurs.
+  However, if the index lock mode is _'Unlock'_, the next time your application starts,
+  it will reset the index definition back to the original version.
+
+* Locking the index allows you to make changes to the running index and prevents the application
+ from setting it back to the previous definition upon startup. See the following steps:
+
+
+ 1. Run your application
+ 2. Modify the index definition on the server (from Studio, or from another application),
+ and then set this index lock mode to `LockedIgnore`.
+ 3. A side-by-side replacement index is created on the server.
+ It will index your dataset according to the **new** definition.
+ 4. At this point, if any instance of your original application is started,
+ the code that defines and deploys the index upon startup will have no effect
+ since the index is 'locked'.
+ 5. Once the replacement index is done indexing, it will replace the original index.
+
+
+
+## Set lock mode - single index
+
+
+
+{`// Define the set lock mode operation
+// Pass index name & lock mode
+const setLockModeOp = new SetIndexesLockOperation("Orders/Totals", "LockedIgnore");
+
+// Execute the operation by passing it to maintenance.send
+// An exception will be thrown if index does not exist
+await store.maintenance.send(setLockModeOp);
+
+// Lock mode is now set to 'LockedIgnore'
+// Any modifications done now to the index will Not be applied, and will Not throw
+`}
+
+
+
+
+
+## Set lock mode - multiple indexes
+
+
+
+{`// Define the index list and the new lock mode:
+const parameters = \{
+ indexNames: ["Orders/Totals", "Orders/ByCompany"],
+ mode: "LockedError"
+\}
+
+// Define the set lock mode operation, pass the parameters
+const setLockModeOp = new SetIndexesLockOperation(parameters);
+
+// Execute the operation by passing it to maintenance.send
+// An exception will be thrown if any of the specified indexes do not exist
+await store.maintenance.send(setLockModeOp);
+
+// Lock mode is now set to 'LockedError' on both indexes
+// Any modifications done now to either index will throw
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+const setLockModeOp = new SetIndexesLockOperation(indexName, mode);
+const setLockModeOp = new SetIndexesLockOperation(parameters);
+`}
+
+
+
+| Parameters | Type | Description |
+|- | - | - |
+| **indexName** | string | Index name for which to set lock mode |
+| **mode** | `"Unlock"` / `"LockedIgnore"` / `"LockedError"` | Lock mode to set |
+| **parameters** | parameters object | List of indexes + lock mode to set. An exception is thrown if any of the specified indexes do not exist. |
+
+
+
+{`// parameters object
+\{
+ indexNames, // string[], list of index names
+ mode // Lock mode to set
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-php.mdx
new file mode 100644
index 0000000000..5a4059b043
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-php.mdx
@@ -0,0 +1,155 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The lock mode controls the behavior of index modifications.
+ Use `SetIndexesLockOperation` to modify the **lock mode** for a single index or multiple indexes.
+
+* **Indexes scope**:
+ The lock mode can be set only for static-indexes, not for auto-indexes.
+
+* **Nodes scope**:
+ The lock mode will be updated on all nodes in the database group.
+
+* Setting the lock mode can also be done in the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view.
+ Locking an index is not a security measure; the index can be unlocked at any time.
+
+* In this page:
+ * [Lock modes](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#lock-modes)
+ * [Sample usage flow](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#sample-usage-flow)
+ * [Set lock mode - single index](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#set-lock-mode---single-index)
+ * [Set lock mode - multiple indexes](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#set-lock-mode---multiple-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#syntax)
+
+
+## Lock modes
+
+* **Unlocked** - when lock mode is set using `unlock()`:
+ * Any change to the index definition will be applied.
+ * If the new index definition differs from the one stored on the server,
+ the index will be updated and the data will be re-indexed using the new index definition.
+ * The index can be deleted.
+
+* **Locked (ignore)** - when lock mode is set using `lockedIgnore()`:
+ * Index definition changes will Not be applied.
+ * Modifying the index definition will return successfully and no error will be raised;
+ however, no change will be made to the index definition on the server.
+ * Trying to delete the index will not remove it from the server, and no error will be raised.
+
+* **Locked (error)** - when lock mode is set using `lockedError()`:
+ * Index definition changes will Not be applied.
+ * An exception will be thrown upon trying to modify the index.
+ * The index cannot be deleted. Attempting to do so will result in an exception - see the sketch below this list.
+
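+
+As an example of the `lockedError()` behavior, here is a minimal sketch.
+It assumes an existing `Orders/Totals` index and that `DeleteIndexOperation` is available
+in your client version:
+
+
+
+{`// A minimal sketch of the 'lockedError()' behavior
+// (assumes an existing 'Orders/Totals' index)
+$store->maintenance()->send(
+    new SetIndexesLockOperation("Orders/Totals", IndexLockMode::lockedError()));
+
+try \{
+    // Attempting to delete a 'LockedError' index throws
+    $store->maintenance()->send(new DeleteIndexOperation("Orders/Totals"));
+\} catch (Throwable $e) \{
+    // The index is locked - the delete attempt raises an exception
+\}
+`}
+
+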
+
+
+## Sample usage flow
+
+Consider the following scenario:
+
+* Your client application defines and [deploys a static-index](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx) upon application startup.
+
+* After the application has started, you make a change to your index definition and re-indexing occurs.
+ However, if the index lock mode is `unlock`, the next time your application starts,
+ it will reset the index definition back to the original version.
+
+* Locking the index allows you to make changes to the running index and prevents the application
+ from setting it back to the previous definition upon startup. See the following steps:
+
+
+ 1. Run your application
+ 2. Modify the index definition on the server (from Studio, or from another application),
+ and then set the index lock mode to `lockedIgnore`.
+ 3. A side-by-side replacement index is created on the server.
+ It will index your dataset according to the **new** definition.
+ 4. At this point, if any instance of your original application is started,
+ the code that defines and deploys the index upon startup will have no effect
+ since the index is locked.
+ 5. Once the replacement index is done indexing, it will replace the original index.
+
+
+
+## Set lock mode - single index
+
+
+
+{`// Define the set lock mode operation
+// Pass index name & lock mode
+$setLockModeOp = new SetIndexesLockOperation("Orders/Totals", IndexLockMode::lockedIgnore());
+
+// Execute the operation by passing it to maintenance()->send()
+// An exception will be thrown if index does not exist
+$store->maintenance()->send($setLockModeOp);
+
+// Lock mode is now set to 'LockedIgnore'
+// Any modifications done now to the index will Not be applied, and will Not throw
+`}
+
+
+
+
+
+## Set lock mode - multiple indexes
+
+
+
+{`// Define the index list and the new lock mode:
+$parameters = new IndexLockParameters();
+$parameters->setIndexNames([ "Orders/Totals", "Orders/ByCompany" ]);
+$parameters->setMode(IndexLockMode::lockedError());
+
+// Define the set lock mode operation, pass the parameters
+$setLockModeOp = new SetIndexesLockOperation($parameters);
+
+// Execute the operation by passing it to maintenance()->send()
+// An exception will be thrown if any of the specified indexes do not exist
+$store->maintenance()->send($setLockModeOp);
+
+// Lock mode is now set to 'LockedError' on both indexes
+// Any modifications done now to either index will throw
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+SetIndexesLockOperation(?string $indexName, ?IndexLockMode $mode);
+SetIndexesLockOperation(?Parameters $parameters);
+`}
+
+
+
+| Parameters | Type | Description |
+|- | - | - |
+| **$indexName** | `?string` | Index name to set lock mode for |
+| **$mode** | `?IndexLockMode` | Lock mode to set |
+| **$parameters** | `?Parameters` | Index lock parameters |
+
+
+
+{`class IndexLockMode
+\{
+ public static function unlock(): IndexLockMode;
+ public static function lockedIgnore(): IndexLockMode;
+ public static function lockedError(): IndexLockMode;
+
+ public function isUnlock(): bool;
+ public function isLockedIgnore(): bool;
+ public function isLockedError(): bool;
+\}
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-python.mdx
new file mode 100644
index 0000000000..bc594b1f55
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-lock-python.mdx
@@ -0,0 +1,145 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The lock mode controls the behavior of index modifications.
+ Use `SetIndexesLockOperation` to modify the **lock mode** for a single index or multiple indexes.
+
+* **Indexes scope**:
+ The lock mode can be set only for static-indexes, not for auto-indexes.
+
+* **Nodes scope**:
+ The lock mode will be updated on all nodes in the database group.
+
+* Setting the lock mode can also be done in the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view.
+ Locking an index is not a security measure; the index can be unlocked at any time.
+
+* In this page:
+ * [Lock modes](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#lock-modes)
+ * [Sample usage flow](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#sample-usage-flow)
+ * [Set lock mode - single index](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#set-lock-mode---single-index)
+ * [Set lock mode - multiple indexes](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#set-lock-mode---multiple-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/set-index-lock.mdx#syntax)
+
+
+## Lock modes
+
+* **Unlocked** - when lock mode is set to `UNLOCK`:
+ * Any change to the index definition will be applied.
+ * If the new index definition differs from the one stored on the server,
+ the index will be updated and the data will be re-indexed using the new index definition.
+ * The index can be deleted.
+
+* **Locked (ignore)** - when lock mode is set to `LOCKED_IGNORE`:
+ * Index definition changes will Not be applied.
+ * Modifying the index definition will return successfully and no error will be raised;
+ however, no change will be made to the index definition on the server.
+ * Trying to delete the index will not remove it from the server, and no error will be raised - see the sketch below this list.
+
+* **Locked (error)** - when lock mode is set to `LOCKED_ERROR`:
+ * Index definition changes will Not be applied.
+ * An exception will be thrown upon trying to modify the index.
+ * The index cannot be deleted. Attempting to do so will result in an exception.
+
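+
+As an example of the `LOCKED_IGNORE` behavior, here is a minimal sketch.
+It assumes an existing `Orders/Totals` index and that `DeleteIndexOperation` and
+`GetIndexOperation` are available in your client version:
+
+
+
+{`# A minimal sketch of the LOCKED_IGNORE behavior
+# (assumes an existing 'Orders/Totals' index)
+store.maintenance.send(SetIndexesLockOperation(IndexLockMode.LOCKED_IGNORE, "Orders/Totals"))
+
+# The delete request below is silently ignored - no error is raised
+store.maintenance.send(DeleteIndexOperation("Orders/Totals"))
+
+# The index still exists on the server
+index_definition = store.maintenance.send(GetIndexOperation("Orders/Totals"))
+assert index_definition is not None
+`}
+
+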
+
+
+## Sample usage flow
+
+Consider the following scenario:
+
+* Your client application defines and [deploys a static-index](../../../../client-api/operations/maintenance/indexes/put-indexes.mdx) upon application startup.
+
+* After the application has started, you make a change to your index definition and re-indexing occurs.
+ However, if the index lock mode is `UNLOCK`, the next time your application starts,
+ it will reset the index definition back to the original version.
+
+* Locking the index allows you to make changes to the running index and prevents the application
+ from setting it back to the previous definition upon startup. See the following steps:
+
+
+ 1. Run your application
+ 2. Modify the index definition on the server (from Studio, or from another application),
+ and then set the index lock mode to `LOCKED_IGNORE`.
+ 3. A side-by-side replacement index is created on the server.
+ It will index your dataset according to the **new** definition.
+ 4. At this point, if any instance of your original application is started,
+ the code that defines and deploys the index upon startup will have no effect
+ since the index is locked.
+ 5. Once the replacement index is done indexing, it will replace the original index.
+
+
+
+## Set lock mode - single index
+
+
+
+{`# Define the set lock mode operation
+# Pass index name & lock mode
+set_lock_mode_op = SetIndexesLockOperation(IndexLockMode.LOCKED_IGNORE, "Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+# An exception will be thrown if index does not exist
+store.maintenance.send(set_lock_mode_op)
+
+# Lock mode is now set to 'LockedIgnore'
+# Any modification done now to the index will Not be applied, and will Not throw
+`}
+
+
+
+
+
+## Set lock mode - multiple indexes
+
+
+
+{`# Define the set lock mode operation, pass the lock mode and multiple index names
+set_lock_mode_op = SetIndexesLockOperation(IndexLockMode.LOCKED_ERROR, "Orders/Totals", "Orders/ByCompany")
+
+# Execute the operation by passing it to maintenance.send
+# An exception will be thrown if any of the specified indexes do not exist
+store.maintenance.send(set_lock_mode_op)
+
+# Lock mode is now set to 'LockedError' on both indexes
+# Any modifications done now to either index will throw
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class SetIndexesLockOperation(VoidMaintenanceOperation):
+ def __init__(self, mode: IndexLockMode, *index_names: str): ...
+`}
+
+
+
+| Parameters | Type | Description |
+|- | - | - |
+| **mode** | `IndexLockMode` | Lock mode to set |
+| **\*index_names** | `str` | Index names to set lock mode for |
+
+
+
+{`class IndexLockMode(Enum):
+ UNLOCK = "Unlock"
+ LOCKED_IGNORE = "LockedIgnore"
+ LOCKED_ERROR = "LockedError"
+
+ def __str__(self):
+ return self.value
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-csharp.mdx
new file mode 100644
index 0000000000..fa87a5070b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-csharp.mdx
@@ -0,0 +1,156 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In RavenDB, each index has its own dedicated thread for all indexing work.
+ By default, RavenDB prioritizes processing requests over indexing,
+ so indexing threads start with a lower priority than request-processing threads.
+
+* Use `SetIndexesPriorityOperation` to raise or lower the index thread priority.
+
+* **Indexes scope**:
+ Index priority can be set for both static and auto indexes.
+
+* **Nodes scope**:
+ The priority will be updated on all nodes in the database group.
+
+* Setting the priority can also be done from the [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) in the Studio.
+
+* In this page:
+ * [Index priority](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#index-priority)
+ * [Set priority - single index](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#set-priority---single-index)
+ * [Set priority - multiple indexes](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#set-priority---multiple-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#syntax)
+
+
+
+## Index priority
+
+Setting the priority will affect the indexing thread priority at the operating system level:
+
+| Priority value | Indexing thread priority at OS level | Description |
+|-----------------------|------------------------------------------|-------------|
+| **Low** | Lowest | Having the `Lowest` priority at the OS level, indexes will run only when there is spare capacity, i.e. when the system is not occupied with higher-priority tasks. Requests to the database will complete faster. Use when querying the server is more important to you than indexing. |
+| **Normal** (default) | Below normal | Requests to the database are still preferred over the indexing process. The indexing thread priority at the OS level is `Below normal`, while request-processing threads have `Normal` priority. |
+| **High** | Normal | Requests and indexing will have the same priority at the OS level. |
+
+## Set priority - single index
+
+
+
+
+{`// Define the set priority operation
+// Pass index name & priority
+var setPriorityOp = new SetIndexesPriorityOperation("Orders/Totals", IndexPriority.High);
+
+// Execute the operation by passing it to Maintenance.Send
+// An exception will be thrown if index does not exist
+store.Maintenance.Send(setPriorityOp);
+`}
+
+
+
+
+{`// Define the set priority operation
+// Pass index name & priority
+var setPriorityOp = new SetIndexesPriorityOperation("Orders/Totals", IndexPriority.High);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// An exception will be thrown if index does not exist
+await store.Maintenance.SendAsync(setPriorityOp);
+`}
+
+
+
+
+
+
+## Set priority - multiple indexes
+
+
+
+
+{`// Define the index list and the new priority:
+var parameters = new SetIndexesPriorityOperation.Parameters
+{
+ IndexNames = new[] {"Orders/Totals", "Orders/ByCompany"},
+ Priority = IndexPriority.Low
+};
+
+// Define the set priority operation, pass the parameters
+var setPriorityOp = new SetIndexesPriorityOperation(parameters);
+
+// Execute the operation by passing it to Maintenance.Send
+// An exception will be thrown if any of the specified indexes do not exist
+store.Maintenance.Send(setPriorityOp);
+`}
+
+
+
+
+{`// Define the index list and the new priority:
+var parameters = new SetIndexesPriorityOperation.Parameters
+{
+ IndexNames = new[] {"Orders/Totals", "Orders/ByCompany"},
+ Priority = IndexPriority.Low
+};
+
+// Define the set priority operation, pass the parameters
+var setPriorityOp = new SetIndexesPriorityOperation(parameters);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+// An exception will be thrown if any of the specified indexes do not exist
+await store.Maintenance.SendAsync(setPriorityOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+public SetIndexesPriorityOperation(string indexName, IndexPriority priority);
+public SetIndexesPriorityOperation(Parameters parameters);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Index name for which to change priority |
+| **priority** | `IndexPriority` | Priority to set |
+| **parameters** | `SetIndexesPriorityOperation.Parameters` | List of indexes + Priority to set. An exception is thrown if any of the specified indexes doesn't exist. |
+
+
+
+{`public enum IndexPriority
+\{
+ Low,
+ Normal,
+ High
+\}
+`}
+
+
+
+
+
+{`public class Parameters
+\{
+ public string[] IndexNames \{ get; set; \}
+ public IndexPriority Priority \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-java.mdx
new file mode 100644
index 0000000000..eccca1d57a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-java.mdx
@@ -0,0 +1,92 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**SetIndexesPriorityOperation** allows you to change an index priority for a given index or indexes.
+
+Setting the priority will affect the indexing thread priority at the operating system level:
+
+| Priority value | Indexing thread priority at OS level | Description |
+|-----------------------|------------------------------------------|-------------|
+| **Low** | Lowest | Having the `Lowest` priority at the OS level, indexes will run only when there is spare capacity, i.e. when the system is not occupied with higher-priority tasks. Requests to the database will complete faster. Use when querying the server is more important to you than indexing. |
+| **Normal** (default) | Below normal | Requests to the database are still preferred over the indexing process. The indexing thread priority at the OS level is `Below normal`, while request-processing threads have `Normal` priority. |
+| **High** | Normal | Requests and indexing will have the same priority at the OS level. |
+
+## Syntax
+
+
+
+{`public SetIndexesPriorityOperation(String indexName, IndexPriority priority)
+public SetIndexesPriorityOperation(SetIndexesPriorityOperation.Parameters parameters)
+`}
+
+
+
+
+
+{`public enum IndexPriority \{
+ LOW,
+ NORMAL,
+ HIGH
+\}
+`}
+
+
+
+
+
+{`public static class Parameters \{
+ private String[] indexNames;
+ private IndexPriority priority;
+
+ public String[] getIndexNames() \{
+ return indexNames;
+ \}
+
+ public void setIndexNames(String[] indexNames) \{
+ this.indexNames = indexNames;
+ \}
+
+ public IndexPriority getPriority() \{
+ return priority;
+ \}
+
+ public void setPriority(IndexPriority priority) \{
+ this.priority = priority;
+ \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | String | name of an index to change priority for |
+| **priority** | IndexPriority | new index priority |
+| **parameters** | SetIndexesPriorityOperation.Parameters | list of indexes + new index priority |
+
+## Example I
+
+
+
+{`store.maintenance().send(
+ new SetIndexesPriorityOperation("Orders/Totals", IndexPriority.HIGH));
+`}
+
+
+
+## Example II
+
+
+
+{`SetIndexesPriorityOperation.Parameters parameters = new SetIndexesPriorityOperation.Parameters();
+parameters.setIndexNames(new String[]\{ "Orders/Totals", "Orders/ByCompany" \});
+parameters.setPriority(IndexPriority.LOW);
+
+store.maintenance().send(new SetIndexesPriorityOperation(parameters));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-nodejs.mdx
new file mode 100644
index 0000000000..6b39f94496
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-nodejs.mdx
@@ -0,0 +1,109 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In RavenDB, each index has its own dedicated thread for all indexing work.
+ By default, RavenDB prioritizes processing requests over indexing,
+ so indexing threads start with a lower priority than request-processing threads.
+
+* Use `SetIndexesPriorityOperation` to raise or lower the index thread priority.
+
+* **Indexes scope**:
+ Index priority can be set for both static and auto indexes.
+
+* **Nodes scope**:
+ The priority will be updated on all nodes in the database group.
+
+* Setting the priority can also be done from the [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) in the Studio.
+
+* In this page:
+ * [Index priority](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#index-priority)
+ * [Set priority - single index](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#set-priority---single-index)
+ * [Set priority - multiple indexes](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#set-priority---multiple-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#syntax)
+
+
+
+## Index priority
+
+Setting the priority will affect the indexing thread priority at the operating system level:
+
+| Priority value | Indexing thread priority at OS level | Description |
+|-----------------------|------------------------------------------|-------------|
+| **Low** | Lowest | Having the `Lowest` priority at the OS level, indexes will run only when there is spare capacity, i.e. when the system is not occupied with higher-priority tasks. Requests to the database will complete faster. Use when querying the server is more important to you than indexing. |
+| **Normal** (default) | Below normal | Requests to the database are still preferred over the indexing process. The indexing thread priority at the OS level is `Below normal`, while request-processing threads have `Normal` priority. |
+| **High** | Normal | Requests and indexing will have the same priority at the OS level. |
+
+## Set priority - single index
+
+
+
+{`// Define the set priority operation
+// Pass index name & priority
+const setPriorityOp = new SetIndexesPriorityOperation("Orders/Totals", "High");
+
+// Execute the operation by passing it to maintenance.send
+// An exception will be thrown if index does not exist
+await store.maintenance.send(setPriorityOp);
+`}
+
+
+
+
+
+## Set priority - multiple indexes
+
+
+
+{`// Define the index list and the new priority:
+const parameters = \{
+ indexNames: ["Orders/Totals", "Orders/ByCompany"],
+ priority: "Low"
+\}
+
+// Define the set priority operation, pass the parameters
+const setPriorityOp = new SetIndexesPriorityOperation(parameters);
+
+// Execute the operation by passing it to maintenance.send
+// An exception will be thrown if any of the specified indexes do not exist
+await store.maintenance.send(setPriorityOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+const setPriorityOp = new SetIndexesPriorityOperation(indexName, priority);
+const setPriorityOp = new SetIndexesPriorityOperation(parameters);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | `string` | Index name for which to change priority |
+| **priority** | `"Low"` / `"Normal"` / `"High"` | Priority to set |
+| **parameters** | parameters object | List of indexes + Priority to set. An exception is thrown if any of the specified indexes doesn't exist. |
+
+
+
+{`// parameters object
+\{
+ indexNames, // string[], list of index names
+ priority // Priority to set
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-php.mdx
new file mode 100644
index 0000000000..b1db277387
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-php.mdx
@@ -0,0 +1,113 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In RavenDB, each index has its own dedicated thread for all indexing work.
+ By default, RavenDB prioritizes processing requests over indexing,
+ so indexing threads start with a lower priority than request-processing threads.
+
+* Use `SetIndexesPriorityOperation` to raise or lower the index thread priority.
+
+* **Indexes scope**:
+ Index priority can be set for both static and auto indexes.
+
+* **Nodes scope**:
+ The priority will be updated on all nodes in the database group.
+
+* Setting the priority can also be done from the [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) in the Studio.
+
+* In this page:
+ * [Index priority](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#index-priority)
+ * [Set priority - single index](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#set-priority---single-index)
+ * [Set priority - multiple indexes](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#set-priority---multiple-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#syntax)
+
+
+
+## Index priority
+
+Setting the priority will affect the indexing thread priority at the operating system level:
+
+| Priority value | Indexing thread priority at OS level | Description |
+|--------------------------------|------------------------------------------|-------------|
+| Set using `low()` | Lowest | Having the `Lowest` priority at the OS level, indexes will run only when there is spare capacity, i.e. when the system is not occupied with higher-priority tasks. Requests to the database will complete faster. Use when querying the server is more important to you than indexing. |
+| Set using `normal()` (default) | Below normal | Requests to the database are still preferred over the indexing process. The indexing thread priority at the OS level is `Below normal`, while request-processing threads have `Normal` priority. |
+| Set using `high()` | Normal | Requests and indexing will have the same priority at the OS level. |
+
+## Set priority - single index
+
+
+
+{`// Define the set priority operation
+// Pass index name & priority
+$setPriorityOp = new SetIndexesPriorityOperation("Orders/Totals", IndexPriority::high());
+
+// Execute the operation by passing it to maintenance()->send()
+// An exception will be thrown if index does not exist
+$store->maintenance()->send($setPriorityOp);
+`}
+
+
+
+
+
+## Set priority - multiple indexes
+
+
+
+{`// Define the index list and the new priority:
+$parameters = new IndexPriorityParameters();
+$parameters->setIndexNames(["Orders/Totals", "Orders/ByCompany"]);
+$parameters->setPriority(IndexPriority::low());
+
+// Define the set priority operation, pass the parameters
+$setPriorityOp = new SetIndexesPriorityOperation($parameters);
+
+// Execute the operation by passing it to maintenance()->send()
+// An exception will be thrown if any of the specified indexes do not exist
+$store->maintenance()->send($setPriorityOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// Available overloads:
+SetIndexesPriorityOperation(?string $indexName, ?IndexPriority $priority);
+SetIndexesPriorityOperation(?Parameters $parameters);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$indexName** | `?string` | Index name for which to change priority |
+| **$priority** | `?IndexPriority` | Priority to set |
+| **$parameters** | `?Parameters` | Index priority parameters |
+
+
+
+{`class IndexPriority
+\{
+ public static function low(): IndexPriority;
+ public static function normal(): IndexPriority;
+ public static function high(): IndexPriority;
+
+ public function isLow(): bool;
+ public function isNormal(): bool;
+ public function isHigh(): bool;
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-python.mdx
new file mode 100644
index 0000000000..0feff233fe
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_set-index-priority-python.mdx
@@ -0,0 +1,100 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* In RavenDB, each index has its own dedicated thread for all indexing work.
+ By default, RavenDB prioritizes processing requests over indexing,
+ so indexing threads start with a lower priority than request-processing threads.
+
+* Use `SetIndexesPriorityOperation` to raise or lower the index thread priority.
+
+* **Indexes scope**:
+ Index priority can be set for both static and auto indexes.
+
+* **Nodes scope**:
+ The priority will be updated on all nodes in the database group.
+
+* Setting the priority can also be done from the [indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) in the Studio.
+
+* In this page:
+ * [Index priority](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#index-priority)
+ * [Set priority - single index](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#set-priority---single-index)
+ * [Set priority - multiple indexes](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#set-priority---multiple-indexes)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/set-index-priority.mdx#syntax)
+
+
+
+## Index priority
+
+Setting the priority will affect the indexing thread priority at the operating system level:
+
+| Priority value | Indexing thread priority at OS level | Description |
+|-----------------------|------------------------------------------|-------------|
+| **LOW** | Lowest | Having the `Lowest` priority at the OS level, indexes will run only when there is spare capacity, i.e. when the system is not occupied with higher-priority tasks. Requests to the database will complete faster. Use when querying the server is more important to you than indexing. |
+| **NORMAL** (default) | Below normal | Requests to the database are still preferred over the indexing process. The indexing thread priority at the OS level is `Below normal`, while request-processing threads have `Normal` priority. |
+| **HIGH** | Normal | Requests and indexing will have the same priority at the OS level. |
+
+## Set priority - single index
+
+
+
+{`# Define the set priority operation
+# Pass index name & priority
+set_priority_op = SetIndexesPriorityOperation(IndexPriority.HIGH, "Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+# An exception will be thrown if index does not exist
+store.maintenance.send(set_priority_op)
+`}
+
+
+
+
+
+## Set priority - multiple indexes
+
+
+
+{`# Define the set priority operation, pass multiple index names
+set_priority_op = SetIndexesPriorityOperation(IndexPriority.LOW, "Orders/Totals", "Orders/ByCompany")
+
+# Execute the operation by passing it to maintenance.send
+# An exception will be thrown if any of the specified indexes do not exist
+store.maintenance.send(set_priority_op)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class SetIndexesPriorityOperation(VoidMaintenanceOperation):
+ def __init__(self, priority: IndexPriority, *index_names: str): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **priority** | `IndexPriority` | Priority to set |
+| **\*index_names** | `str` | Index names for which to change priority |
+
+
+
+{`class IndexPriority(Enum):
+ LOW = "Low"
+ NORMAL = "Normal"
+ HIGH = "High"
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-csharp.mdx
new file mode 100644
index 0000000000..6f367e2f64
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-csharp.mdx
@@ -0,0 +1,83 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* After an index has been paused using [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx),
+ use `StartIndexOperation` to **resume the index**.
+
+* When resuming the index from the **client**:
+ The index is resumed on the preferred node only, and Not on all the database-group nodes.
+
+* When resuming the index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index is resumed on the local node the browser is opened on, even if it is Not the preferred node.
+
+* In this page:
+ * [Resume index example](../../../../client-api/operations/maintenance/indexes/start-index.mdx#resume-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/start-index.mdx#syntax)
+
+
+## Resume index example
+
+
+
+
+{`// Define the resume index operation, pass the index name
+var resumeIndexOp = new StartIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(resumeIndexOp);
+
+// At this point:
+// Index 'Orders/Totals' is resumed on the preferred node
+
+// Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+var indexingStatus = store.Maintenance.Send(new GetIndexingStatusOperation());
+
+var index = indexingStatus.Indexes.FirstOrDefault(x => x.Name == "Orders/Totals");
+Assert.Equal(IndexRunningStatus.Running, index.Status);
+`}
+
+
+
+
+{`// Define the resume index operation, pass the index name
+var resumeIndexOp = new StartIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(resumeIndexOp);
+
+// At this point:
+// Index 'Orders/Totals' is resumed on the preferred node
+
+// Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+var indexingStatus = await store.Maintenance.SendAsync(new GetIndexingStatusOperation());
+
+var index = indexingStatus.Indexes.FirstOrDefault(x => x.Name == "Orders/Totals");
+Assert.Equal(IndexRunningStatus.Running, index.Status);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Start", but this is ok, this is the "Resume" operation
+public StartIndexOperation(string indexName)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - |-|
+| **indexName** | `string` | Name of an index to resume |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-java.mdx
new file mode 100644
index 0000000000..204f3c5ba3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-java.mdx
@@ -0,0 +1,30 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The **StartIndexOperation** is used to resume indexing for an index.
+
+### Syntax
+
+
+
+{`public StartIndexOperation(String indexName)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **indexName** | String | name of an index to start indexing |
+
+### Example
+
+
+
+{`store.maintenance().send(new StartIndexOperation("Orders/Totals"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-nodejs.mdx
new file mode 100644
index 0000000000..a33cab9563
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-nodejs.mdx
@@ -0,0 +1,62 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* After an index has been paused using [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx),
+ use `StartIndexOperation` to **resume the index**.
+
+* When resuming the index from the **client**:
+ The index is resumed on the preferred node only, and Not on all the database-group nodes.
+
+* When resuming the index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index is resumed on the local node the browser is opened on, even if it is Not the preferred node.
+
+* In this page:
+ * [Resume index example](../../../../client-api/operations/maintenance/indexes/start-index.mdx#resume-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/start-index.mdx#syntax)
+
+
+## Resume index example
+
+
+
+{`// Define the resume index operation, pass the index name
+const resumeIndexOp = new StartIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance.send
+await store.maintenance.send(resumeIndexOp);
+
+// At this point:
+// Index 'Orders/Totals' is resumed on the preferred node
+
+// Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+const indexingStatus = await store.maintenance.send(new GetIndexingStatusOperation());
+
+const index = indexingStatus.indexes.find(x => x.name === "Orders/Totals")
+assert.strictEqual(index.status, "Running");
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Start", but this is ok, this is the "Resume" operation
+const resumeIndexOp = new StartIndexOperation(indexName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - |-|
+| **indexName** | `string` | Name of an index to resume |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-php.mdx
new file mode 100644
index 0000000000..7940c9d520
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-php.mdx
@@ -0,0 +1,68 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* After an index has been paused using [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx),
+ use `StartIndexOperation` to **resume the index**.
+
+* When resuming the index from the **client**:
+ The index is resumed on the preferred node only, and Not on all the database-group nodes.
+
+* When resuming the index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index is resumed on the local node the browser is opened on, even if it is Not the preferred node.
+
+* In this page:
+ * [Resume index example](../../../../client-api/operations/maintenance/indexes/start-index.mdx#resume-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/start-index.mdx#syntax)
+
+
+## Resume index example
+
+
+
+{`// Define the resume index operation, pass the index name
+$resumeIndexOp = new StartIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance()->send()
+$store->maintenance()->send($resumeIndexOp);
+
+// At this point:
+// Index 'Orders/Totals' is resumed on the preferred node
+
+// Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+/** @var IndexingStatus $indexingStatus */
+$indexingStatus = $store->maintenance()->send(new GetIndexingStatusOperation());
+
+$indexes = array_filter($indexingStatus->getIndexes()->getArrayCopy(), function ($v, $k) \{
+ return $v->getName() == "Orders/Totals";
+\});
+/** @var IndexingStatus $index */
+$index = $indexes[0];
+
+$this->assertTrue($index->getStatus()->isRunning());
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name begins with "Start" but this is still the "Resume" operation
+StartIndexOperation(?string $indexName)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - |-|
+| **$indexName** | `?string` | Name of an index to resume |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-python.mdx
new file mode 100644
index 0000000000..7d9bda9ee6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-index-python.mdx
@@ -0,0 +1,62 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* After an index has been paused using [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx),
+ use `StartIndexOperation` to **resume the index**.
+
+* When resuming the index from the **client**:
+ The index is resumed on the preferred node only, and Not on all the database-group nodes.
+
+* When resuming the index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index is resumed on the local node the browser is opened on, even if it is Not the preferred node.
+
+* In this page:
+ * [Resume index example](../../../../client-api/operations/maintenance/indexes/start-index.mdx#resume-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/start-index.mdx#syntax)
+
+
+## Resume index example
+
+
+
+{`# Define the resume index operation, pass the index name
+resume_index_op = StartIndexOperation("Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(resume_index_op)
+
+# At this point:
+# Index 'Orders/Totals' is resumed on the preferred node
+
+# Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+indexing_status = store.maintenance.send(GetIndexingStatusOperation())
+
+index = [x for x in indexing_status.indexes if x.name == "Orders/Totals"][0]
+self.assertEqual(index.status, IndexRunningStatus.RUNNING)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class StartIndexOperation(VoidMaintenanceOperation):
+ def __init__(self, index_name: str): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - |-|
+| **index_name** | `str` | Name of an index to resume |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-csharp.mdx
new file mode 100644
index 0000000000..d51c8dd10b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-csharp.mdx
@@ -0,0 +1,78 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* After indexing has been paused using [StopIndexingOperation](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx),
+ use `StartIndexingOperation` to **resume indexing** for ALL indexes in the database.
+
+ Calling `StartIndexingOperation` on a single index will have no effect.
+
+
+* When resuming indexing from the **client**:
+ Indexing is resumed on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When resuming indexing from the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#more-actions) view:
+ Indexing is resumed on the local node the browser is opened on, even if it is Not the preferred node.
+
+* In this page:
+ * [Resume indexing example](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx#resume-indexing-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx#syntax)
+
+
+## Resume indexing example
+
+
+
+
+{`// Define the resume indexing operation
+var resumeIndexingOp = new StartIndexingOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(resumeIndexingOp);
+
+// At this point,
+// you can be sure that all indexes on the preferred node are 'running'
+
+// Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+var indexingStatus = store.Maintenance.Send(new GetIndexingStatusOperation());
+Assert.Equal(IndexRunningStatus.Running, indexingStatus.Status);
+`}
+
+
+
+
+{`// Define the resume indexing operation
+var resumeIndexingOp = new StartIndexingOperation();
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(resumeIndexingOp);
+
+// At this point,
+// you can be sure that all indexes on the preferred node are 'running'
+
+// Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+var indexingStatus = await store.Maintenance.SendAsync(new GetIndexingStatusOperation());
+Assert.Equal(IndexRunningStatus.Running, indexingStatus.Status);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Start", but this is ok, this is the "Resume" operation
+public StartIndexingOperation()
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-java.mdx
new file mode 100644
index 0000000000..04c6bfce47
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-java.mdx
@@ -0,0 +1,26 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**StartIndexingOperation** is used to resume indexing for the entire database.
+
+### Syntax
+
+
+
+{`public StartIndexingOperation()
+`}
+
+
+
+### Example
+
+
+
+{`store.maintenance().send(new StartIndexingOperation());
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-nodejs.mdx
new file mode 100644
index 0000000000..e233e9dfe2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-nodejs.mdx
@@ -0,0 +1,59 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* After indexing has been paused using [StopIndexingOperation](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx),
+ use `StartIndexingOperation` to **resume indexing** for ALL indexes in the database.
+
+ Calling `StartIndexingOperation` on a single index will have no effect.
+
+
+* When resuming indexing from the **client**:
+ Indexing is resumed on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When resuming indexing from the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#more-actions) view:
+ Indexing is resumed on the local node the browser is opened on, even if it is Not the preferred node.
+
+* In this page:
+ * [Resume indexing example](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx#resume-indexing-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx#syntax)
+
+
+## Resume indexing example
+
+
+
+{`// Define the resume indexing operation
+const resumeIndexingOp = new StartIndexingOperation();
+
+// Execute the operation by passing it to maintenance.send
+await store.maintenance.send(resumeIndexingOp);
+
+// At this point,
+// you can be sure that all indexes on the preferred node are 'running'
+
+// Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+const indexingStatus = await store.maintenance.send(new GetIndexingStatusOperation());
+assert.strictEqual(indexingStatus.status, "Running");
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Start", but this is ok, this is the "Resume" operation
+const resumeIndexingOp = new StartIndexingOperation();
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-php.mdx
new file mode 100644
index 0000000000..49c8cc4328
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-php.mdx
@@ -0,0 +1,60 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* After indexing has been paused using [StopIndexingOperation](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx),
+ use `StartIndexingOperation` to **resume indexing** for ALL indexes in the database.
+
+ Calling `StartIndexingOperation` on a single index will have no effect.
+
+
+* When resuming indexing from the **client**:
+ Indexing is resumed on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When resuming indexing from the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#more-actions) view:
+ Indexing is resumed on the local node the browser is opened on, even if it is Not the preferred node.
+
+* In this page:
+ * [Resume indexing example](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx#resume-indexing-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx#syntax)
+
+
+## Resume indexing example
+
+
+
+{`// Define the resume indexing operation
+$resumeIndexingOp = new StartIndexingOperation();
+
+// Execute the operation by passing it to maintenance()->send()
+$store->maintenance()->send($resumeIndexingOp);
+
+// At this point,
+// you can be sure that all indexes on the preferred node are 'running'
+
+// Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+/** @var IndexingStatus $indexingStatus */
+$indexingStatus = $store->maintenance()->send(new GetIndexingStatusOperation());
+$this->assertTrue($indexingStatus->getStatus()->isRunning());
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name prefix is "Start", but this is still the "Resume" operation
+public StartIndexingOperation()
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-python.mdx
new file mode 100644
index 0000000000..6ce957652e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_start-indexing-python.mdx
@@ -0,0 +1,60 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* After indexing has been paused using [StopIndexingOperation](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx),
+ use `StartIndexingOperation` to **resume indexing** for ALL indexes in the database.
+
+ Calling `StartIndexingOperation` on a single index will have no effect.
+
+
+* When resuming indexing from the **client**:
+ Indexing is resumed on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When resuming indexing from the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#more-actions) view:
+ Indexing is resumed on the local node the browser is opened on, even if it is Not the preferred node.
+
+* In this page:
+ * [Resume indexing example](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx#resume-indexing-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx#syntax)
+
+
+## Resume indexing example
+
+
+
+{`# Define the resume indexing operation
+resume_index_op = StartIndexingOperation()
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(resume_index_op)
+
+# At this point:
+# you can be sure that all indexes on the preferred node are 'running'
+
+# Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+indexing_status = store.maintenance.send(GetIndexingStatusOperation())
+
+self.assertEqual(indexing_status.status, IndexRunningStatus.RUNNING)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class StartIndexingOperation(VoidMaintenanceOperation):
+ def __init__(self): ...
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-csharp.mdx
new file mode 100644
index 0000000000..c7123beac9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-csharp.mdx
@@ -0,0 +1,116 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `StopIndexOperation` to **pause a single index** in the database.
+
+* To pause indexing for ALL indexes in the database use [StopIndexingOperation](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx).
+
+* In this page:
+ * [Overview](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#overview)
+ * [Pause index example](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#pause-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#syntax)
+
+
+## Overview
+
+#### Which node is the index paused for?
+
+* When pausing the index from the **client**:
+ The index will be paused for the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only,
+ Not for all database-group nodes.
+
+* When pausing the index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index will be paused for the local node the browser is opened on, even if it is Not the preferred node.
+#### What happens when an index is paused for a node?
+
+* A paused index performs no indexing for the node it is paused for.
+ New data **is** indexed by the index on database-group nodes that the index is not paused for.
+
+* A paused index **can** be queried, but results may be stale when querying the node that the index is paused for.
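+
+To illustrate, here is a minimal sketch that checks whether query results are stale
+on the node being queried. The `Result` projection type is hypothetical,
+and `Orders/Totals` is the index used in the examples below:
+
+
+
+{`using (var session = store.OpenSession())
+\{
+    // 'Result' is a hypothetical projection type for the 'Orders/Totals' index
+    var results = session.Query<Result>("Orders/Totals")
+        .Statistics(out QueryStatistics stats)
+        .ToList();
+
+    // While the index is paused on this node, recent writes may not be indexed yet
+    var resultsMayBeStale = stats.IsStale;
+\}
+`}
+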
+#### Resuming the index:
+
+* Learn how to resume an index by a client here: [Resume index](../../../../client-api/operations/maintenance/indexes/start-index.mdx)
+
+* Learn to resume an index from **Studio** here: [Indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions)
+
+* Pausing the index is **Not a persistent operation**.
+ This means the paused index will resume upon either of the following:
+ * The server is restarted.
+ * The database is re-loaded (by disabling and then enabling it).
+ Toggling the database state can be done using the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#database-actions) view,
+ or using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) by the client.
+
+* [Resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) a paused index will resume the normal operation of the index
+ on the local node where the reset action was performed.
+
+* Modifying the index definition will resume the normal operation of the index
+ on all the nodes for which it is paused.
+
+
+
+## Pause index example
+
+
+
+
+{`// Define the pause index operation, pass the index name
+var pauseIndexOp = new StopIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(pauseIndexOp);
+
+// At this point:
+// Index 'Orders/Totals' is paused on the preferred node
+
+// Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+var indexingStatus = store.Maintenance.Send(new GetIndexingStatusOperation());
+
+var index = indexingStatus.Indexes.FirstOrDefault(x => x.Name == "Orders/Totals");
+Assert.Equal(IndexRunningStatus.Paused, index.Status);
+`}
+
+
+
+
+{`// Define the pause index operation, pass the index name
+var pauseIndexOp = new StopIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(pauseIndexOp);
+
+// At this point:
+// Index 'Orders/Totals' is paused on the preferred node
+
+// Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+var indexingStatus = await store.Maintenance.SendAsync(new GetIndexingStatusOperation());
+
+var index = indexingStatus.Indexes.FirstOrDefault(x => x.Name == "Orders/Totals");
+Assert.Equal(IndexRunningStatus.Paused, index.Status);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Stop", but this is ok, this is the "Pause" operation
+public StopIndexOperation(string indexName)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | string | Name of an index to pause |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-java.mdx
new file mode 100644
index 0000000000..e5c947a2e5
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-java.mdx
@@ -0,0 +1,34 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The **StopIndexOperation** is used to pause indexing of a single index.
+
+
+Indexing will be resumed automatically after a server restart.
+
+
+### Syntax
+
+
+
+{`public StopIndexOperation(String indexName)
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **indexName** | String | Name of the index to pause |
+
+### Example
+
+
+
+{`store.maintenance().send(new StopIndexOperation("Orders/Totals"));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-nodejs.mdx
new file mode 100644
index 0000000000..623f6b9f7b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-nodejs.mdx
@@ -0,0 +1,95 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `StopIndexOperation` to **pause a single index** in the database.
+
+* To pause indexing for ALL indexes in the database use [StopIndexingOperation](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx).
+
+* In this page:
+ * [Overview](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#overview)
+ * [Pause index example](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#pause-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#syntax)
+
+
+## Overview
+
+#### Which node is the index paused for?
+
+* When pausing the index from the **client**:
+ The index will be paused for the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only,
+ Not for all database-group nodes.
+
+* When pausing the index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index will be paused for the local node the browser is opened on, even if it is Not the preferred node.
+
+#### What happens when an index is paused for a node?
+
+* A paused index performs no indexing for the node it is paused for.
+ New data **is** indexed by the index on database-group nodes that the index is not paused for.
+
+* A paused index **can** be queried, but results may be stale when querying the node that the index is paused for.
+
+#### Resuming the index:
+
+* Learn how to resume an index by a client here: [Resume index](../../../../client-api/operations/maintenance/indexes/start-index.mdx)
+
+* Learn to resume an index from **Studio** here: [Indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions)
+
+* Pausing the index is **Not a persistent operation**.
+ This means the paused index will resume upon either of the following:
+ * The server is restarted.
+ * The database is re-loaded (by disabling and then enabling it).
+ Toggling the database state can be done using the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#database-actions) view,
+ or using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) by the client.
+
+* [Resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) a paused index will resume the normal operation of the index
+ on the local node where the reset action was performed.
+
+* Modifying the index definition will resume the normal operation of the index
+ on all the nodes for which it is paused.
+
+
+
+## Pause index example
+
+
+
+{`// Define the pause index operation, pass the index name
+const pauseIndexOp = new StopIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to maintenance.send
+await store.maintenance.send(pauseIndexOp);
+
+// At this point:
+// Index 'Orders/Totals' is paused on the preferred node
+
+// Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+const indexingStatus = await store.maintenance.send(new GetIndexingStatusOperation());
+
+const index = indexingStatus.indexes.find(x => x.name === "Orders/Totals")
+assert.strictEqual(index.status, "Paused");
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Stop", but this is ok, this is the "Pause" operation
+const pauseIndexOp = new StopIndexOperation(indexName);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **indexName** | string | Name of an index to pause |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-php.mdx
new file mode 100644
index 0000000000..7d4b7ece13
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-php.mdx
@@ -0,0 +1,101 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `StopIndexOperation` to **pause a single index** in the database.
+
+* To pause indexing for ALL indexes in the database use [StopIndexingOperation](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx).
+
+* In this page:
+ * [Overview](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#overview)
+ * [Pause index example](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#pause-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#syntax)
+
+
+## Overview
+
+#### Which node is the index paused for?
+
+* When pausing the index from the **client**:
+ The index will be paused for the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only,
+ Not for all database-group nodes.
+
+* When pausing the index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index will be paused for the local node the browser is opened on, even if it is Not the preferred node.
+
+#### What happens when an index is paused for a node?
+
+* A paused index performs no indexing for the node it is paused for.
+ New data **is** indexed by the index on database-group nodes that the index is not paused for.
+
+* A paused index **can** be queried, but results may be stale when querying the node that the index is paused for.
+
+#### Resuming the index:
+
+* Learn how to resume an index by a client here: [Resume index](../../../../client-api/operations/maintenance/indexes/start-index.mdx)
+
+* Learn to resume an index from **Studio** here: [Indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions)
+
+* Pausing the index is **Not a persistent operation**.
+ This means the paused index will resume upon either of the following:
+ * The server is restarted.
+ * The database is re-loaded (by disabling and then enabling it).
+ Toggling the database state can be done using the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#database-actions) view,
+ or using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) by the client.
+
+* [Resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) a paused index will resume the normal operation of the index
+ on the local node where the reset action was performed.
+
+* Modifying the index definition will resume the normal operation of the index
+ on all the nodes for which it is paused.
+
+
+
+## Pause index example
+
+
+
+{`// Define the pause index operation, pass the index name
+$pauseIndexOp = new StopIndexOperation("Orders/Totals");
+
+// Execute the operation by passing it to Maintenance.Send
+$store->maintenance()->send($pauseIndexOp);
+
+// At this point:
+// Index 'Orders/Totals' is paused on the preferred node
+
+// Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+/** @var IndexingStatus $indexingStatus */
+$indexingStatus = $store->maintenance()->send(new GetIndexingStatusOperation());
+
+$indexes = array_values(array_filter($indexingStatus->getIndexes()->getArrayCopy(), function ($v) \{
+    return $v->getName() == "Orders/Totals";
+\}));
+/** @var IndexStatus $index */
+$index = $indexes[0];
+
+$this->assertTrue($index->getStatus()->isPaused());
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Stop", but this is ok, this is the "Pause" operation
+public StopIndexOperation(?string $indexName)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$indexName** | `?string` | Name of an index to pause |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-python.mdx
new file mode 100644
index 0000000000..16a56339e8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-index-python.mdx
@@ -0,0 +1,96 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `StopIndexOperation` to **pause a single index** in the database.
+
+* To pause indexing for ALL indexes in the database use [StopIndexingOperation](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx).
+
+* In this page:
+ * [Overview](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#overview)
+ * [Pause index example](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#pause-index-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/stop-index.mdx#syntax)
+
+
+## Overview
+
+#### Which node is the index paused for?
+
+* When pausing the index from the **client**:
+ The index will be paused for the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only,
+ Not for all database-group nodes.
+
+* When pausing the index from the **Studio** [indexes list](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions) view:
+ The index will be paused for the local node the browser is opened on, even if it is Not the preferred node.
+
+#### What happens when an index is paused for a node?
+
+* A paused index performs no indexing for the node it is paused for.
+ New data **is** indexed by the index on database-group nodes that the index is not paused for.
+
+* A paused index **can** be queried, but results may be stale when querying the node that the index is paused for.
+
+#### Resuming the index:
+
+* Learn how to resume an index by a client here: [Resume index](../../../../client-api/operations/maintenance/indexes/start-index.mdx)
+
+* Learn to resume an index from **Studio** here: [Indexes list view](../../../../studio/database/indexes/indexes-list-view.mdx#indexes-list-view---actions)
+
+* Pausing the index is **Not a persistent operation**.
+ This means the paused index will resume upon either of the following:
+ * The server is restarted.
+ * The database is re-loaded (by disabling and then enabling it).
+ Toggling the database state can be done using the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#database-actions) view,
+ or using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) by the client.
+
+* [Resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) a paused index will resume the normal operation of the index
+ on the local node where the reset action was performed.
+
+* Modifying the index definition will resume the normal operation of the index
+ on all the nodes for which it is paused.
+
+
+
+## Pause index example
+
+
+
+{`# Define the pause index operation, pass the index name
+pause_index_op = StopIndexOperation("Orders/Totals")
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(pause_index_op)
+
+# At this point:
+# Index 'Orders/Totals' is paused on the preferred node
+
+# Can verify the index status on the preferred node by sending GetIndexingStatusOperation
+indexing_status = store.maintenance.send(GetIndexingStatusOperation())
+
+index = [x for x in indexing_status.indexes if x.name == "Orders/Totals"][0]
+self.assertEqual(index.status, IndexRunningStatus.PAUSED)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class StopIndexOperation(VoidMaintenanceOperation):
+ # class name has "Stop", but this is ok, this is the "Pause" operation
+ def __init__(self, index_name: str): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **index_name** | `str` | Name of an index to pause |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-csharp.mdx
new file mode 100644
index 0000000000..faed0ce2cc
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-csharp.mdx
@@ -0,0 +1,110 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `StopIndexingOperation` to **pause indexing** for ALL indexes in the database.
+
+* To pause only a specific index use the [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx).
+
+* In this page:
+ * [Overview](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#overview)
+ * [Pause indexing example](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#pause-indexing-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#syntax)
+
+
+## Overview
+
+#### Which node is indexing paused for?
+
+* When pausing indexing from the **client**:
+ Indexing will be paused on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When pausing indexing from the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#more-actions) view:
+ Indexing will be paused on the local node the browser is opened on, even if it is Not the preferred node.
+
+#### What happens when indexing is paused for a node?
+
+* No indexing takes place on a node that indexing is paused for.
+ New data **is** indexed on database-group nodes that indexing is not paused for.
+
+* All indexes, including paused ones, can be queried,
+ but results may be stale when querying nodes that indexing has been paused for.
+
+* New indexes **can** be created for the database.
+ However, the new indexes will also be paused on any node that indexing is paused for,
+ until indexing is resumed for that node.
+
+* When [resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) indexes
+ or editing index definitions, re-indexing on a node that indexing has been paused for will
+ only be triggered when indexing is resumed on that node.
+
+#### Resuming indexing:
+
+* Learn to resume indexing for all indexes by a client, here: [resume indexing](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx) (a short sketch follows this list)
+
+* Learn to resume indexing for all indexes via **Studio**, here: [database list view](../../../../studio/database/databases-list-view.mdx#more-actions)
+
+* Pausing indexing is **Not a persistent operation**.
+ This means that all paused indexes will resume upon either of the following:
+ * The server is restarted.
+ * The database is re-loaded (by disabling and then enabling it).
+ Toggling the database state can be done using the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#database-actions) view,
+ or using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) by the client.
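+
+As a rough sketch (assuming a hypothetical database named "Northwind"), indexing can be resumed
+either directly with `StartIndexingOperation`, or indirectly by re-loading the database:
+
+
+
+{`// Resume indexing on the preferred node
+store.Maintenance.Send(new StartIndexingOperation());
+
+// Or: re-load the database (disable, then enable) to resume all paused indexes
+store.Maintenance.Server.Send(new ToggleDatabasesStateOperation("Northwind", disable: true));
+store.Maintenance.Server.Send(new ToggleDatabasesStateOperation("Northwind", disable: false));
+`}
+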
+
+
+
+## Pause indexing example
+
+
+
+
+{`// Define the pause indexing operation
+var pauseIndexingOp = new StopIndexingOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(pauseIndexingOp);
+
+// At this point:
+// All indexes in the default database will be 'paused' on the preferred node
+
+// Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+var indexingStatus = store.Maintenance.Send(new GetIndexingStatusOperation());
+Assert.Equal(IndexRunningStatus.Paused, indexingStatus.Status);
+`}
+
+
+
+
+{`// Define the pause indexing operation
+var pauseIndexingOp = new StopIndexingOperation();
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(pauseIndexingOp);
+
+// At this point:
+// All indexes in the default database will be 'paused' on the preferred node
+
+// Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+var indexingStatus = await store.Maintenance.SendAsync(new GetIndexingStatusOperation());
+Assert.Equal(IndexRunningStatus.Paused, indexingStatus.Status);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Stop", but this is ok, this is the "Pause" operation
+public StopIndexingOperation()
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-java.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-java.mdx
new file mode 100644
index 0000000000..54afe97f9b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-java.mdx
@@ -0,0 +1,32 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**StopIndexingOperation** is used to pause indexing for the entire database.
+
+Use [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx) to pause a single index.
+
+
+Indexing will be resumed automatically after a server restart or after using [start indexing operation](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx).
+
+
+### Syntax
+
+
+
+{`public StopIndexingOperation()
+`}
+
+
+
+### Example
+
+
+
+{`store.maintenance().send(new StopIndexingOperation());
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-nodejs.mdx
new file mode 100644
index 0000000000..790e2ac915
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-nodejs.mdx
@@ -0,0 +1,91 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `StopIndexingOperation` to **pause indexing** for ALL indexes in the database.
+
+* To pause only a specific index use the [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx).
+
+* In this page:
+ * [Overview](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#overview)
+ * [Pause indexing example](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#pause-indexing-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#syntax)
+
+
+## Overview
+
+#### Which node is indexing paused for?
+
+* When pausing indexing from the **client**:
+ Indexing will be paused on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When pausing indexing from the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#more-actions) view:
+ Indexing will be paused on the local node the browser is opened on, even if it is Not the preferred node.
+
+#### What happens when indexing is paused for a node?
+
+* No indexing takes place on a node that indexing is paused for.
+ New data **is** indexed on database-group nodes that indexing is not paused for.
+
+* All indexes, including paused ones, can be queried,
+ but results may be stale when querying nodes that indexing has been paused for.
+
+* New indexes **can** be created for the database.
+ However, the new indexes will also be paused on any node that indexing is paused for,
+ until indexing is resumed for that node.
+
+* When [resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) indexes
+ or editing index definitions, re-indexing on a node that indexing has been paused for will
+ only be triggered when indexing is resumed on that node.
+
+#### Resuming indexing:
+
+* Learn to resume indexing for all indexes by a client, here: [resume indexing](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx)
+
+* Learn to resume indexing for all indexes via **Studio**, here: [database list view](../../../../studio/database/databases-list-view.mdx#more-actions)
+
+* Pausing indexing is **Not a persistent operation**.
+ This means that all paused indexes will resume upon either of the following:
+ * The server is restarted.
+ * The database is re-loaded (by disabling and then enabling it).
+ Toggling the database state can be done using the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#database-actions) view,
+ or using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) by the client.
+
+
+
+## Pause indexing example
+
+
+
+{`// Define the pause indexing operation
+const pauseIndexingOp = new StopIndexingOperation();
+
+// Execute the operation by passing it to maintenance.send
+await store.maintenance.send(pauseIndexingOp);
+
+// At this point:
+// All indexes in the default database will be 'paused' on the preferred node
+
+// Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+const indexingStatus = await store.maintenance.send(new GetIndexingStatusOperation());
+assert.strictEqual(indexingStatus.status, "Paused");
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name has "Stop", but this is ok, this is the "Pause" operation
+const pauseIndexingOp = new StopIndexingOperation();
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-php.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-php.mdx
new file mode 100644
index 0000000000..c9a0e93d81
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-php.mdx
@@ -0,0 +1,92 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `StopIndexingOperation` to **pause indexing** for ALL indexes in the database.
+
+* To pause only a specific index use the [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx).
+
+* In this page:
+ * [Overview](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#overview)
+ * [Pause indexing example](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#pause-indexing-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#syntax)
+
+
+## Overview
+
+#### Which node is indexing paused for?
+
+* When pausing indexing from the **client**:
+ Indexing will be paused on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When pausing indexing from the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#more-actions) view:
+ Indexing will be paused on the local node the browser is opened on, even if it is Not the preferred node.
+
+#### What happens when indexing is paused for a node?
+
+* No indexing takes place on a node that indexing is paused for.
+ New data **is** indexed on database-group nodes that indexing is not paused for.
+
+* All indexes, including paused ones, can be queried,
+ but results may be stale when querying nodes that indexing has been paused for.
+
+* New indexes **can** be created for the database.
+ However, the new indexes will also be paused on any node that indexing is paused for,
+ until indexing is resumed for that node.
+
+* When [resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) indexes
+ or editing index definitions, re-indexing on a node that indexing has been paused for will
+ only be triggered when indexing is resumed on that node.
+
+#### Resuming indexing:
+
+* Learn to resume indexing for all indexes by a client, here: [resume indexing](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx)
+
+* Learn to resume indexing for all indexes via **Studio**, here: [database list view](../../../../studio/database/databases-list-view.mdx#more-actions)
+
+* Pausing indexing is **Not a persistent operation**.
+ This means that all paused indexes will resume upon either of the following:
+ * The server is restarted.
+ * The database is re-loaded (by disabling and then enabling it).
+ Toggling the database state can be done using the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#database-actions) view,
+ or using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) by the client.
+
+
+
+## Pause indexing example
+
+
+
+{`// Define the pause indexing operation
+$pauseIndexingOp = new StopIndexingOperation();
+
+// Execute the operation by passing it to Maintenance.Send
+$store->maintenance()->send($pauseIndexingOp);
+
+// At this point:
+// All indexes in the default database will be 'paused' on the preferred node
+
+// Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+/** @var IndexingStatus $indexingStatus */
+$indexingStatus = $store->maintenance()->send(new GetIndexingStatusOperation());
+$this->assertTrue($indexingStatus->getStatus()->isPaused());
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`// class name begins with "Stop" but this is still the "Pause" operation
+StopIndexingOperation()
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-python.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-python.mdx
new file mode 100644
index 0000000000..04a8f83777
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/_stop-indexing-python.mdx
@@ -0,0 +1,92 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `StopIndexingOperation` to **pause indexing** for ALL indexes in the database.
+
+* To pause only a specific index use the [StopIndexOperation](../../../../client-api/operations/maintenance/indexes/stop-index.mdx).
+
+* In this page:
+ * [Overview](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#overview)
+ * [Pause indexing example](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#pause-indexing-example)
+ * [Syntax](../../../../client-api/operations/maintenance/indexes/stop-indexing.mdx#syntax)
+
+
+## Overview
+
+#### Which node is indexing paused for?
+
+* When pausing indexing from the **client**:
+ Indexing will be paused on the [preferred node](../../../../client-api/configuration/load-balance/overview.mdx#the-preferred-node) only, and Not on all the database-group nodes.
+
+* When pausing indexing from the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#more-actions) view:
+ Indexing will be paused on the local node the browser is opened on, even if it is Not the preferred node.
+
+#### What happens when indexing is paused for a node?
+
+* No indexing takes place on a node that indexing is paused for.
+ New data **is** indexed on database-group nodes that indexing is not paused for.
+
+* All indexes, including paused ones, can be queried,
+ but results may be stale when querying nodes that indexing has been paused for.
+
+* New indexes **can** be created for the database.
+ However, the new indexes will also be paused on any node that indexing is paused for,
+ until indexing is resumed for that node.
+
+* When [resetting](../../../../client-api/operations/maintenance/indexes/reset-index.mdx) indexes
+ or editing index definitions, re-indexing on a node that indexing has been paused for will
+ only be triggered when indexing is resumed on that node.
+
+#### Resuming indexing:
+
+* Learn to resume indexing for all indexes by a client, here: [resume indexing](../../../../client-api/operations/maintenance/indexes/start-indexing.mdx)
+
+* Learn to resume indexing for all indexes via **Studio**, here: [database list view](../../../../studio/database/databases-list-view.mdx#more-actions)
+
+* Pausing indexing is **Not a persistent operation**.
+ This means that all paused indexes will resume upon either of the following:
+ * The server is restarted.
+ * The database is re-loaded (by disabling and then enabling it).
+ Toggling the database state can be done using the **Studio** [database list](../../../../studio/database/databases-list-view.mdx#database-actions) view,
+ or using [ToggleDatabasesStateOperation](../../../../client-api/operations/server-wide/toggle-databases-state.mdx) by the client.
+
+
+
+## Pause indexing example
+
+
+
+{`# Define the pause indexing operation
+pause_indexing_op = StopIndexingOperation()
+
+# Execute the operation by passing it to maintenance.send
+store.maintenance.send(pause_indexing_op)
+
+# At this point:
+# All indexes in the default database will be 'paused' on the preferred node
+
+# Can verify indexing status on the preferred node by sending GetIndexingStatusOperation
+indexing_status = store.maintenance.send(GetIndexingStatusOperation())
+self.assertEqual(indexing_status.status, IndexRunningStatus.PAUSED)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`# class name has "Stop", but this is ok, this is the "Pause" operation
+class StopIndexingOperation(VoidMaintenanceOperation):
+ def __init__(self): ...
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/delete-index-errors.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/delete-index-errors.mdx
new file mode 100644
index 0000000000..286dfdb070
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/delete-index-errors.mdx
@@ -0,0 +1,49 @@
+---
+title: "Delete Index Errors Operation"
+hide_table_of_contents: true
+sidebar_label: Delete Index Errors
+sidebar_position: 5
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeleteIndexErrorsCsharp from './_delete-index-errors-csharp.mdx';
+import DeleteIndexErrorsPython from './_delete-index-errors-python.mdx';
+import DeleteIndexErrorsPhp from './_delete-index-errors-php.mdx';
+import DeleteIndexErrorsNodejs from './_delete-index-errors-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/delete-index.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/delete-index.mdx
new file mode 100644
index 0000000000..71cacc4daf
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/delete-index.mdx
@@ -0,0 +1,53 @@
+---
+title: "Delete Index Operation"
+hide_table_of_contents: true
+sidebar_label: Delete Index
+sidebar_position: 4
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeleteIndexCsharp from './_delete-index-csharp.mdx';
+import DeleteIndexJava from './_delete-index-java.mdx';
+import DeleteIndexPython from './_delete-index-python.mdx';
+import DeleteIndexPhp from './_delete-index-php.mdx';
+import DeleteIndexNodejs from './_delete-index-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/disable-index.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/disable-index.mdx
new file mode 100644
index 0000000000..546efeb1e3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/disable-index.mdx
@@ -0,0 +1,55 @@
+---
+title: "Disable Index"
+hide_table_of_contents: true
+sidebar_label: Disable Index
+sidebar_position: 6
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DisableIndexCsharp from './_disable-index-csharp.mdx';
+import DisableIndexJava from './_disable-index-java.mdx';
+import DisableIndexPython from './_disable-index-python.mdx';
+import DisableIndexPhp from './_disable-index-php.mdx';
+import DisableIndexNodejs from './_disable-index-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/enable-index.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/enable-index.mdx
new file mode 100644
index 0000000000..48c6ace76e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/enable-index.mdx
@@ -0,0 +1,55 @@
+---
+title: "Enable Index Operation"
+hide_table_of_contents: true
+sidebar_label: Enable Index
+sidebar_position: 7
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import EnableIndexCsharp from './_enable-index-csharp.mdx';
+import EnableIndexJava from './_enable-index-java.mdx';
+import EnableIndexPython from './_enable-index-python.mdx';
+import EnableIndexPhp from './_enable-index-php.mdx';
+import EnableIndexNodejs from './_enable-index-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index-errors.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index-errors.mdx
new file mode 100644
index 0000000000..b85ca61d59
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index-errors.mdx
@@ -0,0 +1,54 @@
+---
+title: "Get Index Errors Operation"
+hide_table_of_contents: true
+sidebar_label: Get Index Errors
+sidebar_position: 14
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetIndexErrorsCsharp from './_get-index-errors-csharp.mdx';
+import GetIndexErrorsJava from './_get-index-errors-java.mdx';
+import GetIndexErrorsPython from './_get-index-errors-python.mdx';
+import GetIndexErrorsPhp from './_get-index-errors-php.mdx';
+import GetIndexErrorsNodejs from './_get-index-errors-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index-names.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index-names.mdx
new file mode 100644
index 0000000000..c729d93ffd
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index-names.mdx
@@ -0,0 +1,54 @@
+---
+title: "Get Index Names Operation"
+hide_table_of_contents: true
+sidebar_label: Get Index Names
+sidebar_position: 15
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetIndexNamesCsharp from './_get-index-names-csharp.mdx';
+import GetIndexNamesJava from './_get-index-names-java.mdx';
+import GetIndexNamesPython from './_get-index-names-python.mdx';
+import GetIndexNamesPhp from './_get-index-names-php.mdx';
+import GetIndexNamesNodejs from './_get-index-names-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index.mdx
new file mode 100644
index 0000000000..265f20a539
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-index.mdx
@@ -0,0 +1,54 @@
+---
+title: "Get Index Operation"
+hide_table_of_contents: true
+sidebar_label: Get Index
+sidebar_position: 12
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetIndexCsharp from './_get-index-csharp.mdx';
+import GetIndexJava from './_get-index-java.mdx';
+import GetIndexPython from './_get-index-python.mdx';
+import GetIndexPhp from './_get-index-php.mdx';
+import GetIndexNodejs from './_get-index-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-indexes.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-indexes.mdx
new file mode 100644
index 0000000000..75c7ba547d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-indexes.mdx
@@ -0,0 +1,55 @@
+---
+title: "Get Indexes Operation"
+hide_table_of_contents: true
+sidebar_label: Get Indexes
+sidebar_position: 13
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetIndexesCsharp from './_get-indexes-csharp.mdx';
+import GetIndexesJava from './_get-indexes-java.mdx';
+import GetIndexesPython from './_get-indexes-python.mdx';
+import GetIndexesPhp from './_get-indexes-php.mdx';
+import GetIndexesNodejs from './_get-indexes-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-terms.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-terms.mdx
new file mode 100644
index 0000000000..e9e2bcf4f3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/get-terms.mdx
@@ -0,0 +1,49 @@
+---
+title: "Get Index Terms Operation"
+hide_table_of_contents: true
+sidebar_label: Get Index Terms
+sidebar_position: 16
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetTermsCsharp from './_get-terms-csharp.mdx';
+import GetTermsJava from './_get-terms-java.mdx';
+import GetTermsPython from './_get-terms-python.mdx';
+import GetTermsPhp from './_get-terms-php.mdx';
+import GetTermsNodejs from './_get-terms-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/index-has-changed.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/index-has-changed.mdx
new file mode 100644
index 0000000000..1c9dfc2c04
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/index-has-changed.mdx
@@ -0,0 +1,54 @@
+---
+title: "Index has Changed Operation"
+hide_table_of_contents: true
+sidebar_label: Index Has Changed
+sidebar_position: 17
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import IndexHasChangedCsharp from './_index-has-changed-csharp.mdx';
+import IndexHasChangedJava from './_index-has-changed-java.mdx';
+import IndexHasChangedPython from './_index-has-changed-python.mdx';
+import IndexHasChangedPhp from './_index-has-changed-php.mdx';
+import IndexHasChangedNodejs from './_index-has-changed-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/put-indexes.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/put-indexes.mdx
new file mode 100644
index 0000000000..8a8c0cbe17
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/put-indexes.mdx
@@ -0,0 +1,55 @@
+---
+title: "Put Indexes Operation"
+hide_table_of_contents: true
+sidebar_label: Put Indexes
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PutIndexesCsharp from './_put-indexes-csharp.mdx';
+import PutIndexesJava from './_put-indexes-java.mdx';
+import PutIndexesPython from './_put-indexes-python.mdx';
+import PutIndexesPhp from './_put-indexes-php.mdx';
+import PutIndexesNodejs from './_put-indexes-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/reset-index.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/reset-index.mdx
new file mode 100644
index 0000000000..09f323f319
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/reset-index.mdx
@@ -0,0 +1,56 @@
+---
+title: "Reset Index Operation"
+hide_table_of_contents: true
+sidebar_label: Reset Index
+sidebar_position: 3
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ResetIndexCsharp from './_reset-index-csharp.mdx';
+import ResetIndexJava from './_reset-index-java.mdx';
+import ResetIndexPython from './_reset-index-python.mdx';
+import ResetIndexPhp from './_reset-index-php.mdx';
+import ResetIndexNodejs from './_reset-index-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/set-index-lock.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/set-index-lock.mdx
new file mode 100644
index 0000000000..fe2e234f2e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/set-index-lock.mdx
@@ -0,0 +1,54 @@
+---
+title: "Set Index Lock Mode Operation"
+hide_table_of_contents: true
+sidebar_label: Set Index Lock Mode
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SetIndexLockCsharp from './_set-index-lock-csharp.mdx';
+import SetIndexLockJava from './_set-index-lock-java.mdx';
+import SetIndexLockPython from './_set-index-lock-python.mdx';
+import SetIndexLockPhp from './_set-index-lock-php.mdx';
+import SetIndexLockNodejs from './_set-index-lock-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/set-index-priority.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/set-index-priority.mdx
new file mode 100644
index 0000000000..fbcabc1cc4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/set-index-priority.mdx
@@ -0,0 +1,54 @@
+---
+title: "Set Index Priority Operation"
+hide_table_of_contents: true
+sidebar_label: Set Index Priority
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SetIndexPriorityCsharp from './_set-index-priority-csharp.mdx';
+import SetIndexPriorityJava from './_set-index-priority-java.mdx';
+import SetIndexPriorityPython from './_set-index-priority-python.mdx';
+import SetIndexPriorityPhp from './_set-index-priority-php.mdx';
+import SetIndexPriorityNodejs from './_set-index-priority-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/start-index.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/start-index.mdx
new file mode 100644
index 0000000000..7512ec5671
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/start-index.mdx
@@ -0,0 +1,56 @@
+---
+title: "Resume Index Operation"
+hide_table_of_contents: true
+sidebar_label: Resume Index
+sidebar_position: 10
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import StartIndexCsharp from './_start-index-csharp.mdx';
+import StartIndexJava from './_start-index-java.mdx';
+import StartIndexPython from './_start-index-python.mdx';
+import StartIndexPhp from './_start-index-php.mdx';
+import StartIndexNodejs from './_start-index-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/start-indexing.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/start-indexing.mdx
new file mode 100644
index 0000000000..0bcbcbb8a0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/start-indexing.mdx
@@ -0,0 +1,57 @@
+---
+title: "Resume Indexing Operation"
+hide_table_of_contents: true
+sidebar_label: Resume Indexing
+sidebar_position: 11
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import StartIndexingCsharp from './_start-indexing-csharp.mdx';
+import StartIndexingJava from './_start-indexing-java.mdx';
+import StartIndexingPython from './_start-indexing-python.mdx';
+import StartIndexingPhp from './_start-indexing-php.mdx';
+import StartIndexingNodejs from './_start-indexing-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/stop-index.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/stop-index.mdx
new file mode 100644
index 0000000000..f7461b6435
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/stop-index.mdx
@@ -0,0 +1,60 @@
+---
+title: "Pause Index Operation"
+hide_table_of_contents: true
+sidebar_label: Pause Index
+sidebar_position: 8
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import StopIndexCsharp from './_stop-index-csharp.mdx';
+import StopIndexJava from './_stop-index-java.mdx';
+import StopIndexPython from './_stop-index-python.mdx';
+import StopIndexPhp from './_stop-index-php.mdx';
+import StopIndexNodejs from './_stop-index-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/stop-indexing.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/stop-indexing.mdx
new file mode 100644
index 0000000000..805e9319b3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/indexes/stop-indexing.mdx
@@ -0,0 +1,58 @@
+---
+title: "Pause Indexing Operation"
+hide_table_of_contents: true
+sidebar_label: Pause Indexing
+sidebar_position: 9
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import StopIndexingCsharp from './_stop-indexing-csharp.mdx';
+import StopIndexingJava from './_stop-indexing-java.mdx';
+import StopIndexingPython from './_stop-indexing-python.mdx';
+import StopIndexingPhp from './_stop-indexing-php.mdx';
+import StopIndexingNodejs from './_stop-indexing-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/_category_.json b/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/_category_.json
new file mode 100644
index 0000000000..25fbf693c1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 5,
+ "label": Ongoing Tasks,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/_ongoing-task-operations-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/_ongoing-task-operations-csharp.mdx
new file mode 100644
index 0000000000..8197a44643
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/_ongoing-task-operations-csharp.mdx
@@ -0,0 +1,213 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Once an ongoing task is created, it can be managed via the Client API [Operations](../../../../client-api/operations/what-are-operations.mdx).
+ You can get task info, toggle the task state (enable, disable), or delete the task.
+
+* Ongoing tasks can also be managed via the [Tasks list view](../../../../studio/database/tasks/ongoing-tasks/general-info.mdx#ongoing-tasks---view) in the Studio.
+
+* In this page:
+ * [Get ongoing task info](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#get-ongoing-task-info)
+ * [Delete ongoing task](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#delete-ongoing-task)
+ * [Toggle ongoing task state](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#toggle-ongoing-task-state)
+ * [Syntax](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#syntax)
+
+
+
+## Get ongoing task info
+
+For the examples in this article, let's create a simple external replication ongoing task:
+
+
+
+{`// Define a simple External Replication task
+var taskDefinition = new ExternalReplication
+\{
+ Name = "MyExtRepTask",
+ ConnectionStringName = "MyConnectionStringName"
+\};
+
+// Deploy the task to the server
+var taskOp = new UpdateExternalReplicationOperation(taskDefinition);
+var sendResult = store.Maintenance.Send(taskOp);
+
+// The task ID is available in the send result
+var taskId = sendResult.TaskId;
+`}
+
+
+Use `GetOngoingTaskInfoOperation` to get information about an ongoing task.
+
+
+
+
+{`// Define the get task operation, pass:
+// * The ongoing task ID or the task name
+// * The task type
+var getTaskInfoOp = new GetOngoingTaskInfoOperation(taskId, OngoingTaskType.Replication);
+
+// Execute the operation by passing it to Maintenance.Send
+var taskInfo = (OngoingTaskReplication)store.Maintenance.Send(getTaskInfoOp);
+
+// Access the task info
+var taskState = taskInfo.TaskState;
+var taskDelayTime = taskInfo.DelayReplicationFor;
+var destinationUrls = taskInfo.TopologyDiscoveryUrls;
+// ...
+`}
+
+
+
+
+{`var getTaskInfoOp = new GetOngoingTaskInfoOperation(taskId, OngoingTaskType.Replication);
+var taskInfo = (OngoingTaskReplication) await store.Maintenance.SendAsync(getTaskInfoOp);
+
+var taskState = taskInfo.TaskState;
+var taskDelayTime = taskInfo.DelayReplicationFor;
+var destinationUrls = taskInfo.TopologyDiscoveryUrls;
+// ...
+`}
+
+
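+Note that `GetOngoingTaskInfoOperation` can also be constructed with the task name instead of
+the task ID (see the [Syntax](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx#syntax) section).
+A short sketch using the task name defined above:
+
+
+
+{`// Get the task info by task name instead of by task ID
+var getTaskInfoByNameOp = new GetOngoingTaskInfoOperation("MyExtRepTask", OngoingTaskType.Replication);
+var taskInfoByName = (OngoingTaskReplication)store.Maintenance.Send(getTaskInfoByNameOp);
+`}
+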
+
+
+
+
+## Delete ongoing task
+
+Use `DeleteOngoingTaskOperation` to remove an ongoing task from the list of tasks assigned to the database.
+
+
+
+
+{`// Define the delete task operation, pass:
+// * The ongoing task ID
+// * The task type
+var deleteTaskOp = new DeleteOngoingTaskOperation(taskId, OngoingTaskType.Replication);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(deleteTaskOp);
+`}
+
+
+
+
+{`var deleteTaskOp = new DeleteOngoingTaskOperation(taskId, OngoingTaskType.Replication);
+await store.Maintenance.SendAsync(deleteTaskOp);
+`}
+
+
+
+
+
+
+## Toggle ongoing task state
+
+Use `ToggleOngoingTaskStateOperation` to enable/disable the task state.
+
+
+
+
+{`// Define the toggle task operation, pass:
+// * The ongoing task ID
+// * The task type
+// * A boolean value: true to disable the task, false to enable it
+var toggleTaskOp = new ToggleOngoingTaskStateOperation(taskId, OngoingTaskType.Replication, true);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(toggleTaskOp);
+`}
+
+
+
+
+{`var toggleTaskOp = new ToggleOngoingTaskStateOperation(taskId, OngoingTaskType.Replication, true);
+await store.Maintenance.SendAsync(toggleTaskOp);
+`}
+
+
+
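+To re-enable a disabled task, send the same operation with the last argument set to `false`
+(a minimal sketch):
+
+
+
+{`// Enable the task again: pass 'false' for the 'disable' argument
+var enableTaskOp = new ToggleOngoingTaskStateOperation(taskId, OngoingTaskType.Replication, false);
+store.Maintenance.Send(enableTaskOp);
+`}
+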
+
+
+
+## Syntax
+
+
+
+{`// Get
+public GetOngoingTaskInfoOperation(long taskId, OngoingTaskType type);
+public GetOngoingTaskInfoOperation(string taskName, OngoingTaskType type);
+`}
+
+
+
+
+
+{`// Delete
+public DeleteOngoingTaskOperation(long taskId, OngoingTaskType taskType);
+`}
+
+
+
+
+
+{`// Toggle
+public ToggleOngoingTaskStateOperation(long taskId, OngoingTaskType type, bool disable);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|-------------------|--------------------------------------------------------|
+| **taskId** | `long` | Task ID |
+| **taskName** | `string` | Task name |
+| **taskType** | `OngoingTaskType` | Task type |
+| **disable**  | `bool`            | `true` to disable the task, `false` to enable it       |
+
+
+
+{`public enum OngoingTaskType
+\{
+ Replication,
+ RavenEtl,
+ SqlEtl,
+ OlapEtl,
+ ElasticSearchEtl,
+ QueueEtl,
+ Backup,
+ Subscription,
+ PullReplicationAsHub,
+ PullReplicationAsSink,
+ QueueSink,
+\}
+`}
+
+
+
+
+
+| Return value of `store.Maintenance.Send(GetOngoingTaskInfoOperation)` | |
+|-------------------------------------------------------------------------|----------------------------------------|
+| `OngoingTaskReplication` | Object with information about the task |
+
+
+
+{`public sealed class OngoingTaskReplication : OngoingTask
+\{
+ public OngoingTaskReplication() => this.TaskType = OngoingTaskType.Replication;
+ public string DestinationUrl \{ get; set; \}
+ public string[] TopologyDiscoveryUrls \{ get; set; \}
+ public string DestinationDatabase \{ get; set; \}
+ public string ConnectionStringName \{ get; set; \}
+ public TimeSpan DelayReplicationFor \{ get; set; \}
+\}
+`}
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx
new file mode 100644
index 0000000000..17ad1464e1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.mdx
@@ -0,0 +1,24 @@
+---
+title: "Ongoing Task Operations"
+hide_table_of_contents: true
+sidebar_label: Ongoing Task Operations
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import OngoingTaskOperationsCsharp from './_ongoing-task-operations-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/_put-sorter-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/_put-sorter-csharp.mdx
new file mode 100644
index 0000000000..35b4e1a8d2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/_put-sorter-csharp.mdx
@@ -0,0 +1,115 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The Lucene indexing engine allows you to create your own __Custom Sorters__
+ where you can define how query results will be ordered based on your specific requirements.
+
+* Use `PutSortersOperation` to deploy a custom sorter to the RavenDB server.
+ Once deployed, you can use it to sort query results for all queries made on the __database__
+ that is scoped to your [Document Store](../../../../client-api/setting-up-default-database.mdx).
+
+* To deploy a custom sorter that will apply cluster-wide, to all databases, see [put server-wide sorter](../../../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx).
+
+* A custom sorter can also be uploaded to the server from the [Studio](../../../../studio/database/settings/custom-sorters.mdx).
+
+* In this page:
+ * [Put custom sorter](../../../../client-api/operations/maintenance/sorters/put-sorter.mdx#put-custom-sorter)
+ * [Syntax](../../../../client-api/operations/maintenance/sorters/put-sorter.mdx#syntax)
+
+
+## Put custom sorter
+
+* First, create your own sorter class that inherits from the Lucene class [Lucene.Net.Search.FieldComparator](https://lucenenet.apache.org/docs/3.0.3/df/d91/class_lucene_1_1_net_1_1_search_1_1_field_comparator.html).
+
+* Then, send the custom sorter to the server using the `PutSortersOperation`.
+
+
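+The `mySorterCode` string in the example below is left empty as a placeholder; in practice you might read the sorter's source code from a file. A minimal sketch (the file path is an assumption):
+
+
+
+{`// Load the custom sorter's source code from a local .cs file (hypothetical path).
+// The file should contain a compilable class named "MySorter" that inherits
+// from Lucene.Net.Search.FieldComparator.
+string mySorterCode = System.IO.File.ReadAllText(@"Sorters/MySorter.cs");
+`}
+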
+
+
+{`// Assign the code of your custom sorter as a \`string\`
+string mySorterCode = "";
+
+// Create the \`SorterDefinition\` object
+var customSorterDefinition = new SorterDefinition
+{
+ // The sorter Name must be the same as the sorter's class name in your code
+ Name = "MySorter",
+ // The Code must be compilable and include all necessary using statements
+ Code = mySorterCode
+};
+
+// Define the put sorters operation, pass the sorter definition
+// Note: multiple sorters can be passed, see syntax below
+var putSortersOp = new PutSortersOperation(customSorterDefinition);
+
+// Execute the operation by passing it to Maintenance.Send
+store.Maintenance.Send(putSortersOp);
+`}
+
+
+
+
+{`// Assign the code of your custom sorter as a \`string\`
+string mySorterCode = "";
+
+// Create the \`SorterDefinition\` object
+var customSorterDefinition = new SorterDefinition
+{
+ // The sorter Name must be the same as the sorter's class name in your code
+ Name = "MySorter",
+ // The Code must be compilable and include all necessary using statements
+ Code = mySorterCode
+};
+
+// Define the put sorters operation, pass the sorter definition
+// Note: multiple sorters can be passed, see syntax below
+var putSortersOp = new PutSortersOperation(customSorterDefinition);
+
+// Execute the operation by passing it to Maintenance.SendAsync
+await store.Maintenance.SendAsync(putSortersOp);
+`}
+
+
+
+
+
+
+You can now order your query results using the custom sorter.
+A query example is available [here](../../../../client-api/session/querying/sort-query-results.mdx#custom-sorters).
+
+
+
+
+
+## Syntax
+
+
+
+{`public PutSortersOperation(params SorterDefinition[] sortersToAdd)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|----------------------|------------------------------------------------------|
+| __sortersToAdd__ | `SorterDefinition[]` | One or more Sorter Definitions to send to the server |
+
+
+
+
+{`public class SorterDefinition
+\{
+ public string Name \{ get; set; \}
+ public string Code \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/_put-sorter-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/_put-sorter-nodejs.mdx
new file mode 100644
index 0000000000..434d8e8eda
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/_put-sorter-nodejs.mdx
@@ -0,0 +1,85 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The Lucene indexing engine allows you to create your own __Custom Sorters__
+ where you can define how query results will be ordered based on your specific requirements.
+
+* Use `PutSortersOperation` to deploy a custom sorter to the RavenDB server.
+ Once deployed, you can use it to sort query results for all queries made on the __database__
+ that is scoped to your [Document Store](../../../../client-api/setting-up-default-database.mdx).
+
+* To deploy a custom sorter that will apply cluster-wide, to all databases, see [put server-wide sorter](../../../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx).
+
+* A custom sorter can also be uploaded to the server from the [Studio](../../../../studio/database/settings/custom-sorters.mdx).
+
+* In this page:
+ * [Put custom sorter](../../../../client-api/operations/maintenance/sorters/put-sorter.mdx#put-custom-sorter)
+ * [Syntax](../../../../client-api/operations/maintenance/sorters/put-sorter.mdx#syntax)
+
+
+## Put custom sorter
+
+* First, create your own sorter class that inherits from the Lucene class [Lucene.Net.Search.FieldComparator](https://lucenenet.apache.org/docs/3.0.3/df/d91/class_lucene_1_1_net_1_1_search_1_1_field_comparator.html).
+
+* Then, send the custom sorter to the server using the `PutSortersOperation`.
+
+
+
+{`// Create the sorter definition object
+const sorterDefinition = \{
+ // The sorter name must be the same as the sorter's class name in your code
+ name: "MySorter",
+ // The code must be compilable and include all necessary using statements (C# code)
+ code: ""
+\};
+
+// Define the put sorters operation, pass the sorter definition
+const putSorterOp = new PutSortersOperation(sorterDefinition);
+
+// Execute the operation by passing it to maintenance.send
+await documentStore.maintenance.send(putSorterOp);
+`}
+
+
+
+
+
+You can now order your query results using the custom sorter.
+A query example is available [here](../../../../client-api/session/querying/sort-query-results.mdx#custom-sorters).
+
+
+
+
+
+## Syntax
+
+
+
+{`const putSorterOp = new PutSortersOperation(sortersToAdd);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|---------------|-------------------------------------------------------------|
+| __sortersToAdd__ | `...object[]` | One or more Sorter Definition objects to send to the server |
+
+
+
+
+{`// The sorter definition object
+\{
+ name: string;
+ code: string;
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/put-sorter.mdx b/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/put-sorter.mdx
new file mode 100644
index 0000000000..b1ffdf268f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/maintenance/sorters/put-sorter.mdx
@@ -0,0 +1,39 @@
+---
+title: "Put Custom Sorter Operation"
+hide_table_of_contents: true
+sidebar_label: Put Custom Sorter
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PutSorterCsharp from './_put-sorter-csharp.mdx';
+import PutSorterNodejs from './_put-sorter-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/_category_.json b/versioned_docs/version-7.1/client-api/operations/patching/_category_.json
new file mode 100644
index 0000000000..f0be7deba4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 7,
+  "label": "Patching"
+}
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/_set-based-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/patching/_set-based-csharp.mdx
new file mode 100644
index 0000000000..3b228ced53
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/_set-based-csharp.mdx
@@ -0,0 +1,326 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+Sometimes we need to update a large number of documents matching certain criteria. In SQL, a simple query doing that would look like this:
+
+`UPDATE Users SET IsActive = 0 WHERE LastLogin < '2020-01-01'`
+
+Set-based operations like this are usually not supported by NoSQL databases. RavenDB does support them: you pass it a query and an operation definition, and it runs the query and performs the operation on its results.
+
+The same queries and indexes that are used for data retrieval are used for set-based operations. The syntax defining which documents to operate on is exactly the same as the syntax you'd use to pull those documents from the store.
+
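+For instance, the SQL statement above could be expressed as the following set-based patch (a sketch; the Users collection and its fields follow the SQL example rather than RavenDB's sample data):
+
+
+
+{`var operation = store
+    .Operations
+    .Send(new PatchByQueryOperation(@"from Users as u
+                                      where u.LastLogin < '2020-01-01'
+                                      update
+                                      \{
+                                          u.IsActive = false;
+                                      \}"));
+operation.WaitForCompletion();
+`}
+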
+In this page:
+
+* [Syntax overview](../../../client-api/operations/patching/set-based.mdx#syntax-overview)
+* [Examples](../../../client-api/operations/patching/set-based.mdx#examples)
+* [Additional notes](../../../client-api/operations/patching/set-based.mdx#additional-notes)
+
+
+
+## Syntax overview
+
+### Sending a Patch Request
+
+
+
+{`Operation Send(PatchByQueryOperation operation);
+`}
+
+
+
+| Parameter | | |
+| ------------- | ------------- | ----- |
+| **operation** | `PatchByQueryOperation` | PatchByQueryOperation object, describing the query and the patch that will be performed |
+
+| Return Value | |
+| ------------- | ----- |
+| `Operation` | Object that allows waiting for the operation to complete. It may also return information about the performed patch; see the examples below. |
+
+### PatchByQueryOperation
+
+
+
+{`public PatchByQueryOperation(string queryToUpdate)
+`}
+
+
+
+
+
+{`public PatchByQueryOperation(IndexQuery queryToUpdate, QueryOperationOptions options = null)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **queryToUpdate** | `string`                | The query & patch definition. The RQL query starts like any other RQL query, with a "from" statement, and continues with an "update" clause that contains the JavaScript patching code. |
+| **queryToUpdate** | `IndexQuery`            | Object containing the query & the patching string, with the option to use parameters.                                                                                                   |
+| **options**       | `QueryOperationOptions` | Options defining how the operation is performed and the constraints applied to it. Default: `null`                                                                                      |
+
+
+
+## Examples
+
+### Update whole collection
+
+
+{`// increase the Freight field by 10 in all orders
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(@"from Orders as o
+ update
+ \{
+ o.Freight +=10;
+ \}"));
+// Wait for the operation to be complete on the server side.
+// Not waiting for completion will not harm the patch process and it will continue running to completion.
+operation.WaitForCompletion();
+`}
+
+
+
+### Update by dynamic query
+
+
+{`// set a discount for all orders that were processed by a specific employee
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(@"from Orders as o
+ where o.Employee = 'employees/4-A'
+ update
+ \{
+ o.Lines.forEach(line=> line.Discount = 0.3);
+ \}"));
+operation.WaitForCompletion();
+`}
+
+
+
+### Update by static index query result
+
+
+{`// switch all products with supplier 'suppliers/12-A' with 'suppliers/13-A'
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(new IndexQuery
+ \{
+ Query = @"from index 'Product/Search' as p
+ where p.Supplier = 'suppliers/12-A'
+ update
+ \{
+ p.Supplier = 'suppliers/13-A'
+ \}"
+ \}));
+
+operation.WaitForCompletion();
+`}
+
+
+
+### Updating a collection name
+
+
+{`// delete the document before recreating it with a different collection name
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(new IndexQuery
+ \{
+ Query = @"from Orders as c
+ update
+ \{
+ del(id(c));
+ this[""@metadata""][""@collection""] = ""New_Orders"";
+ put(id(c), this);
+ \}"
+ \}));
+
+operation.WaitForCompletion();
+`}
+
+
+
+### Updating by document ID
+
+
+{`// perform a patch by document ID
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(new IndexQuery
+ \{
+ Query = @"from @all_docs as d
+ where id() in ('orders/1-A', 'companies/1-A')
+ update
+ \{
+ d.Updated = true;
+ \}"
+ \}));
+
+operation.WaitForCompletion();
+`}
+
+
+
+### Updating by document ID using parameters
+
+
+{`// perform a patch by document ID
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(new IndexQuery
+ \{
+ QueryParameters = new Parameters
+ \{
+ \{"ids", new[] \{"orders/1-A", "companies/1-A"\}\}
+ \},
+ Query = @"from @all_docs as d
+ where id() in ($ids)
+ update
+ \{
+ d.Updated = true;
+ \}"
+ \}));
+
+operation.WaitForCompletion();
+`}
+
+
+
+### Updating all documents
+
+
+{`// perform a patch on all documents using @all_docs keyword
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(new IndexQuery
+ \{
+ Query = @"from @all_docs
+ update
+ \{
+ this.Updated = true;
+ \}"
+ \}));
+
+operation.WaitForCompletion();
+`}
+
+
+
+### Patch on stale results
+
+
+{`// patch on stale results
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(new IndexQuery
+ \{
+ Query = @"from Orders as o
+ where o.Company = 'companies/12-A'
+ update
+ \{
+ o.Company = 'companies/13-A'
+ \}"
+ \},
+ new QueryOperationOptions
+ \{
+ AllowStale = true
+ \}));
+
+operation.WaitForCompletion();
+`}
+
+
+
+### Report progress on patch
+
+
+{`// report progress during patch processing
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(new IndexQuery
+ \{
+ Query = @"from Orders as o
+ where o.Company = 'companies/12-A'
+ update
+ \{
+ o.Company = 'companies/13-A'
+ \}"
+ \},
+ new QueryOperationOptions
+ \{
+ AllowStale = true
+ \}));
+
+operation.OnProgressChanged += (sender, x) =>
+\{
+ var det = (DeterminateProgress)x;
+ Console.WriteLine($"Processed: \{det.Processed\}; Total: \{det.Total\}");
+\};
+
+operation.WaitForCompletion();
+`}
+
+
+
+### Process patch results details
+
+
+{`// perform patch and create summary of processing statuses
+var operation = store
+ .Operations
+ .Send(new PatchByQueryOperation(new IndexQuery
+ \{
+ Query = @"from Orders as o
+ where o.Company = 'companies/12-A'
+ update
+ \{
+ o.Company = 'companies/13-A'
+ \}"
+ \},
+ new QueryOperationOptions
+ \{
+ RetrieveDetails = true
+ \}));
+
+var result = operation.WaitForCompletion();
+var formattedResults =
+ result.Details
+ .Select(x => (BulkOperationResult.PatchDetails)x)
+ .GroupBy(x => x.Status)
+ .Select(x => $"\{x.Key\}: \{x.Count()\}").ToList();
+
+formattedResults.ForEach(Console.WriteLine);
+`}
+
+
+
+
+
+## Additional notes
+
+
+
+By default, set-based operations will **not work** on indexes that are stale. The operations will **only succeed** if the specified **index is not stale**. This is to make sure you only modify what you intended to modify.
+
+For indexes that are updated all the time, you can set the `AllowStale` field of `QueryOperationOptions` to `true` if you want to patch on stale results.
+
+
+
+
+
+The patching of documents matching a specified query is run in batches of size 1024. RavenDB doesn't perform concurrency checks during the operation, so it can happen that a document is updated or deleted while the operation runs.
+
+
+
+
+
+The patching of documents matching a specified query is run in batches of size 1024.
+Each batch is handled in a separate write transaction.
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/_set-based-java.mdx b/versioned_docs/version-7.1/client-api/operations/patching/_set-based-java.mdx
new file mode 100644
index 0000000000..05609e94de
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/_set-based-java.mdx
@@ -0,0 +1,257 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+Sometimes we need to update a large number of documents matching certain criteria. In SQL, a simple query doing that would look like this:
+
+`UPDATE Users SET IsActive = 0 WHERE LastLogin < '2020-01-01'`
+
+Set-based operations like this are usually not supported by NoSQL databases. RavenDB does support them: you pass it a query and an operation definition, and it runs the query and performs the operation on its results.
+
+The same queries and indexes that are used for data retrieval are used for set-based operations. The syntax defining which documents to operate on is exactly the same as the syntax you'd use to pull those documents from the store.
+
+In this page:
+
+* [Syntax overview](../../../client-api/operations/patching/set-based.mdx#syntax-overview)
+* [Examples](../../../client-api/operations/patching/set-based.mdx#examples)
+* [Additional notes](../../../client-api/operations/patching/set-based.mdx#additional-notes)
+
+
+
+## Syntax overview
+
+### Sending a Patch Request
+
+
+
+{`Operation sendAsync(PatchByQueryOperation operation);
+`}
+
+
+
+| Parameter | | |
+| ------------- | ------------- | ----- |
+| **operation** | `PatchByQueryOperation` | PatchByQueryOperation object, describing the query and the patch that will be performed |
+
+| Return Value | |
+| ------------- | ----- |
+| `Operation` | Object that allows waiting for the operation to complete. It may also return information about the performed patch; see the examples below. |
+
+### PatchByQueryOperation
+
+
+
+{`public PatchByQueryOperation(String queryToUpdate)
+`}
+
+
+
+
+
+{`public PatchByQueryOperation(IndexQuery queryToUpdate);
+
+public PatchByQueryOperation(IndexQuery queryToUpdate, QueryOperationOptions options);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **queryToUpdate** | `String` or `IndexQuery` | RQL query defining the update operation. The query starts like any other RQL query, with a "from" statement, and continues with an "update" clause that contains the JavaScript patching code. |
+| **options**       | `QueryOperationOptions`  | Options defining how the operation is performed and the constraints applied to it.                                                                                                             |
+
+## Examples
+
+### Update whole collection
+
+
+{`// increase the Freight field by 10 in all orders
+Operation operation = store
+ .operations()
+ .sendAsync(new PatchByQueryOperation("from Orders as o update \{" +
+ " o.Freight += 10;" +
+ "\}"));
+
+// Wait for the operation to be complete on the server side.
+// Not waiting for completion will not harm the patch process and it will continue running to completion.
+operation.waitForCompletion();
+`}
+
+
+
+### Update by dynamic query
+
+
+{`Operation operation = store
+ .operations()
+ .sendAsync(new PatchByQueryOperation("from Orders as o" +
+ " where o.Employee = 'employees/1-A'" +
+ " update " +
+ "\{ " +
+ " o.Lines.forEach(line => line.Discount = 0.3);" +
+ "\}"));
+
+operation.waitForCompletion();
+`}
+
+
+
+### Update by static index query result
+
+
+{`// switch all products with supplier 'suppliers/12-A' with 'suppliers/13-A'
+Operation operation = store
+ .operations()
+ .sendAsync(new PatchByQueryOperation(new IndexQuery("" +
+ "from index 'Product/Search' as p " +
+ " where p.Supplier = 'suppliers/12-A'" +
+ " update \{" +
+ " p.Supplier = 'suppliers/13-A'" +
+ "\}")));
+
+
+operation.waitForCompletion();
+`}
+
+
+
+### Updating a collection name
+
+
+{`// delete the document before recreating it with a different collection name
+
+Operation operation = store
+ .operations()
+ .sendAsync(new PatchByQueryOperation(new IndexQuery(
+ "from Orders as c " +
+ "update \{" +
+ " del(id(c));" +
+            "    this['@metadata']['@collection'] = 'New_Orders'; " +
+ " put(id(c), this); " +
+ "\}"
+ )));
+
+operation.waitForCompletion();
+`}
+
+
+
+### Updating by document ID
+
+
+{`// perform a patch by document ID
+
+Operation operation = store
+ .operations()
+ .sendAsync(new PatchByQueryOperation(new IndexQuery(
+ "from @all_docs as d " +
+ " where id() in ('orders/1-A', 'companies/1-A')" +
+ " update " +
+ "\{" +
+ " d.Updated = true; " +
+ "\} "
+ )));
+
+operation.waitForCompletion();
+`}
+
+
+
+### Updating by document ID using parameters
+
+
+{`// perform a patch by document ID
+IndexQuery indexQuery = new IndexQuery(
+ "from @all_docs as d " +
+ " where id() in ($ids)" +
+ " update " +
+ " \{" +
+ " d.Updated = true; " +
+ "\} "
+);
+Parameters parameters = new Parameters();
+parameters.put("ids", new String[]\{"orders/1-A", "companies/1-A"\});
+indexQuery.setQueryParameters(parameters);
+Operation operation = store
+ .operations()
+ .sendAsync(new PatchByQueryOperation(indexQuery));
+
+operation.waitForCompletion();
+`}
+
+
+
+### Updating all documents
+
+
+{`// perform a patch on all documents using @all_docs keyword
+
+Operation operation = store
+ .operations()
+ .sendAsync(new PatchByQueryOperation(new IndexQuery(
+ "from @all_docs " +
+ " update " +
+ "\{ " +
+ " this.Updated = true;" +
+ "\}"
+ )));
+
+operation.waitForCompletion();
+`}
+
+
+
+### Patch on stale results
+
+
+{`// patch on stale results
+
+QueryOperationOptions options = new QueryOperationOptions();
+options.setAllowStale(true);
+
+Operation operation = store
+ .operations()
+ .sendAsync(new PatchByQueryOperation(new IndexQuery(
+ "from Orders as o " +
+ "where o.Company = 'companies/12-A' " +
+ "update " +
+ "\{ " +
+ " o.Company = 'companies/13-A';" +
+ "\} "
+ ), options));
+
+
+operation.waitForCompletion();
+`}
+
+
+
+
+
+## Additional notes
+
+
+
+By default, set-based operations will **not work** on indexes that are stale. The operations will **only succeed** if the specified **index is not stale**. This is to make sure you only modify what you intended to modify.
+
+For indexes that are updated all the time, you can set the `AllowStale` field of `QueryOperationOptions` to `true` if you want to patch on stale results.
+
+
+
+
+
+The patching of documents matching a specified query is run in batches of size 1024. RavenDB doesn't perform concurrency checks during the operation, so it can happen that a document is updated or deleted while the operation runs.
+
+
+
+
+
+The patching of documents matching a specified query is run in batches of size 1024.
+Each batch is handled in a separate write transaction.
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/_set-based-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/patching/_set-based-nodejs.mdx
new file mode 100644
index 0000000000..66cbf27c7d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/_set-based-nodejs.mdx
@@ -0,0 +1,402 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Set-based patch operations allow you to apply changes to a set of documents that match specific criteria instead of separately targeting each document.
+
+* To perform patch operations on a single document see [Single Document Patch Operations](../../../client-api/operations/patching/single-document.mdx).
+ Set-based patching can also be done from the [Studio](../../../studio/database/documents/patch-view.mdx).
+
+* In this page:
+ * [Overview](../../../client-api/operations/patching/set-based.mdx#overview)
+ * [Defining set-based patching](../../../client-api/operations/patching/set-based.mdx#defining-set-based-patching)
+ * [Important characteristics](../../../client-api/operations/patching/set-based.mdx#important-characteristics)
+ * [Examples](../../../client-api/operations/patching/set-based.mdx#examples)
+ * [Update by collection query](../../../client-api/operations/patching/set-based.mdx#update-by-collection-query)
+ * [Update by collection query - access metadata](../../../client-api/operations/patching/set-based.mdx#update-by-collection-query---access-metadata)
+ * [Update by dynamic query](../../../client-api/operations/patching/set-based.mdx#update-by-dynamic-query)
+ * [Update by static index query](../../../client-api/operations/patching/set-based.mdx#update-by-static-index-query)
+ * [Update all documents](../../../client-api/operations/patching/set-based.mdx#update-all-documents)
+ * [Update by document ID](../../../client-api/operations/patching/set-based.mdx#update-by-document-id)
+ * [Update by document ID using parameters](../../../client-api/operations/patching/set-based.mdx#update-by-document-id-using-parameters)
+ * [Allow updating stale results](../../../client-api/operations/patching/set-based.mdx#allow-updating-stale-results)
+ * [Syntax](../../../client-api/operations/patching/set-based.mdx#syntax)
+ * [Send syntax](../../../client-api/operations/patching/set-based.mdx#send-syntax)
+ * [PatchByQueryOperation syntax](../../../client-api/operations/patching/set-based.mdx#syntax)
+
+
+## Overview
+
+
+
+ __Defining set-based patching__:
+ * In other databases, a simple SQL query that updates a set of documents can look like this:
+ `UPDATE Users SET IsActive = 0 WHERE LastLogin < '2020-01-01'`
+
+ * To achieve that in RavenDB, define the following two components within a `PatchByQueryOperation`:
+
+ 1. __The query__:
+ An [RQL](../../../client-api/session/querying/what-is-rql.mdx) query that defines the set of documents to update.
+ Use the exact same syntax as you would when querying the database/indexes for usual data retrieval.
+
+ 2. __The update__:
+ A JavaScript clause that defines the updates to perform on the documents resulting from the query.
+
+ * When sending the `PatchByQueryOperation` to the server, the server will run the query and perform the requested update on the query results.
+
+
+
+{`// A "query & update" sample
+// Update the set of documents from the Orders collection that match the query criteria:
+// =====================================================================================
+
+// The RQL part:
+from Orders where Freight < 10
+
+// The UPDATE part:
+update \{
+ this.Freight += 10;
+\}
+`}
+
+
+
+
+
+
+ __Important characteristics__:
+* __Transactional batches__:
+ The patching of documents matching a specified query is run in batches of size 1024.
+ Each batch is handled in a separate write transaction.
+
+* __Dynamic behavior__:
+ During the patching process, documents that are added/modified after the patching operation has started
+ may also be patched if they match the query criteria.
+
+* __Concurrency__:
+ RavenDB doesn't perform concurrency checks during the patching process so it can happen that a document
+ has been modified or deleted while patching is in progress.
+
+* __Patching stale indexes__:
+  By default, set-based patch operations will only succeed if the index is not [stale](../../../indexes/stale-indexes.mdx).
+ For indexes that are frequently updated, you can explicitly allow patching on stale results if needed.
+ An example can be seen in the [Allow updating stale results](../../../client-api/operations/patching/set-based.mdx#allow-updating-stale-results) example.
+
+* __Manage lengthy patch operations__:
+  The set-based patch operation (`PatchByQueryOperation`) runs in the server background and may take a long time to complete.
+  Executing the operation via the `send` method returns an object that can be __awaited for completion__ or __aborted__ (killed).
+  Learn more about this and see dedicated examples in [Manage lengthy operations](../../../client-api/operations/what-are-operations.mdx#manage-lengthy-operations).
+
+
+
+
+
+## Examples
+
+
+
+ __Update by collection query__:
+
+
+{`// Update all documents in a collection
+// ====================================
+
+// Define the Patch by Query Operation, pass the "query & update" string:
+const patchByQueryOp = new PatchByQueryOperation(
+ \`from Orders as o
+ update
+ \{
+ // Increase the Freight in ALL documents in the Orders collection:
+ o.Freight += 10;
+ \}\`);
+
+// Execute the operation by passing it to operations.send:
+const operation = await documentStore.operations.send(patchByQueryOp);
+`}
+
+
+
+
+
+
+ __Update by collection query - access metadata__:
+
+
+{`// Update the collection name for all documents in the collection
+// ==============================================================
+
+// Delete the document before recreating it with a different collection name:
+const patchByQueryOp = new PatchByQueryOperation(
+ \`from Orders as c
+ update
+ \{
+ del(id(c));
+ this["@metadata"]["@collection"] = "New_Orders";
+ put(id(c), this);
+ \}\`);
+
+const operation = await documentStore.operations.send(patchByQueryOp);
+`}
+
+
+
+
+
+
+ __Update by dynamic query__:
+
+
+{`// Update all documents matching a dynamic query
+// =============================================
+
+// Update the Discount in all orders that match the dynamic query predicate:
+const patchByQueryOp = new PatchByQueryOperation(\`from Orders as o
+ where o.Employee = 'employees/4-A'
+ update
+ \{
+ o.Lines.forEach(line=> line.Discount = 0.3);
+ \}\`);
+
+const operation = await documentStore.operations.send(patchByQueryOp);
+
+// Note: An AUTO-INDEX will be created when the dynamic query is executed on the server.
+`}
+
+
+
+
+
+
+ __Update by static index query__:
+
+
+
+{`// Update all documents matching a static index query
+// ==================================================
+
+// Modify the Supplier to 'suppliers/13-A' for all products that have 'suppliers/12-A':
+const patchByQueryOp = new PatchByQueryOperation(\`from index 'Products/BySupplier' as p
+ where p.Supplier = 'suppliers/12-A'
+ update
+ {
+ p.Supplier = 'suppliers/13-A'
+ }\`);
+
+const operation = await documentStore.operations.send(patchByQueryOp);
+`}
+
+
+
+
+{`class Products_BySupplier extends AbstractJavaScriptIndexCreationTask {
+ constructor() {
+ super();
+
+ // Define the index-fields
+ this.map("Products", p => ({
+            Supplier: p.Supplier
+ }));
+ }
+}
+`}
+
+
+
+
+
+
+
+ __Update all documents__:
+
+
+{`// Update all documents matching an @all_docs query
+// ================================================
+
+// Patch the 'Updated' field to ALL documents (query is using the @all_docs keyword):
+const patchByQueryOp = new PatchByQueryOperation(\`from @all_docs
+ update
+ \{
+ this.Updated = true;
+ \}\`);
+
+const operation = await documentStore.operations.send(patchByQueryOp);
+`}
+
+
+
+
+
+
+ __Update by document ID__:
+
+
+{`// Update all documents matching a query by ID
+// ===========================================
+
+// Patch the 'Updated' field to all documents that have the specified IDs:
+const patchByQueryOp = new PatchByQueryOperation(\`from @all_docs as d
+ where id() in ('orders/1-A', 'companies/1-A')
+ update
+ \{
+ d.Updated = true;
+ \}\`);
+
+const operation = await documentStore.operations.send(patchByQueryOp);
+`}
+
+
+
+
+
+
+ __Update by document ID using parameters__:
+
+
+{`// Update all documents matching a query by ID using query parameters
+// =================================================================
+
+// Define an IndexQuery object:
+const indexQuery = new IndexQuery();
+
+// Define the "query & update" string
+// Patch the 'Updated' field to all documents that have the specified IDs
+// Parameter ($ids) contains the listed IDs:
+indexQuery.query = \`from @all_docs as d
+ where id() in ($ids)
+ update \{
+ d.Updated = true
+ \}\`;
+
+// Define the parameters for the script:
+indexQuery.queryParameters = \{
+ ids: ["orders/830-A", "companies/91-A"]
+\};
+
+// Pass the indexQuery to the operation definition
+const patchByQueryOp = new PatchByQueryOperation(indexQuery);
+
+// Execute the operation
+const operation = await documentStore.operations.send(patchByQueryOp);
+`}
+
+
+
+
+
+
+ __Allow updating stale results__:
+* Set `allowStale` to _true_ to allow patching of stale results.
+
+* The RQL in this example is using an auto-index.
+ Use _allowStale_ in exactly the same way when querying a static-index.
+
+
+
+{`// Update documents matching a dynamic query even if the auto-index is stale
+// =====================================================================
+
+// Define an IndexQuery object:
+const indexQuery = new IndexQuery();
+
+// Define the "query & update" string
+// Modify company to 'companies/13-A' for all orders that have 'companies/12-A':
+indexQuery.query = \`from Orders as o
+ where o.Company = 'companies/12-A'
+ update
+ \{
+ o.Company = 'companies/13-A'
+ \}\`;
+
+// Define query options:
+const queryOptions = \{
+ // The query uses an auto-index (index is created if it doesn't exist yet).
+ // Allow patching on all matching documents even if the auto-index is still stale.
+ allowStale: true
+\};
+
+// Pass indexQuery & queryOptions to the operation definition
+const patchByQueryOp = new PatchByQueryOperation(indexQuery, queryOptions);
+
+// Execute the operation
+const operation = await documentStore.operations.send(patchByQueryOp);
+`}
+
+
+
+
+
+
+## Syntax
+#### Send syntax
+
+
+
+{`await send(operation);
+`}
+
+
+
+| Parameter | Type | Description |
+|---------------|-------------------------|---------------------------------------------------------------------|
+| __operation__ | `PatchByQueryOperation` | The operation object describing the query and the patch to perform. |
+
+| Return value | |
+|---------------------------------------|-----------------------------------------------------------------------------------------|
+| `Promise` | A promise that resolves to an object that allows waiting for the operation to complete. |
+#### PatchByQueryOperation syntax
+
+
+
+{`// Available overloads:
+// ===================
+patchByQueryOp = new PatchByQueryOperation(queryToUpdate);
+patchByQueryOp = new PatchByQueryOperation(queryToUpdate, options);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|--------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| __queryToUpdate__ | `string`     | The query & patch definition. The RQL query starts like any other RQL query, with a "from" statement, and continues with an "update" clause that contains the JavaScript patching code. |
+| __queryToUpdate__ | `IndexQuery` | Object containing the query & the patching string, with the option to use parameters. |
+| __options__ | `object` | Options for the _PatchByQueryOperation_. |
+
+
+
+
+{`class IndexQuery \{
+ query; // string
+  queryParameters; // Record<string, object>
+\}
+`}
+
+
+
+
+
+{`// Options for 'PatchByQueryOperation'
+\{
+    // Limit the number of base operations per second allowed.
+ maxOpsPerSecond; // number
+
+ // Indicate whether operations are allowed on stale indexes.
+ allowStale; // boolean
+
+ // If AllowStale is set to false and index is stale,
+ // then this is the maximum timeout to wait for index to become non-stale.
+ // If timeout is exceeded then exception is thrown.
+ staleTimeout; // number
+
+ // Set whether operation details about each document should be returned by server.
+ retrieveDetails; // boolean
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/_single-document-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/patching/_single-document-csharp.mdx
new file mode 100644
index 0000000000..4726b95eb9
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/_single-document-csharp.mdx
@@ -0,0 +1,1230 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The __Patch__ operation is used to perform _partial_ document updates with __one trip to the server__,
+ instead of loading, modifying, and saving a full document.
+  The whole operation is executed on the server side and is useful as a performance enhancement or for
+ updating denormalized data in entities.
+
+* Since the operation is executed in a single request to the database,
+ the patch command is performed in a single write [transaction](../../../client-api/faq/transaction-support.mdx).
+
+* The current page covers patch operations on single documents.
+
+* Patching has three possible interfaces: [Session API](../../../client-api/operations/patching/single-document.mdx#session-api),
+[Session API using Defer](../../../client-api/operations/patching/single-document.mdx#session-api-using-defer),
+and [Operations API](../../../client-api/operations/patching/single-document.mdx#operations-api).
+
+* Patching can be done from the [client API](../../../client-api/operations/patching/single-document.mdx#examples) as well as in the [studio](../../../studio/database/documents/patch-view.mdx).
+
+In this page:
+
+* [API overview](../../../client-api/operations/patching/single-document.mdx#api-overview)
+ * [Session API](../../../client-api/operations/patching/single-document.mdx#session-api)
+ * [Session API using Defer](../../../client-api/operations/patching/single-document.mdx#session-api-using-defer)
+ * [Operations API](../../../client-api/operations/patching/single-document.mdx#operations-api)
+ * [List of script methods](../../../client-api/operations/patching/single-document.mdx#list-of-script-methods)
+* [Examples](../../../client-api/operations/patching/single-document.mdx#examples)
+ * [Change value of single field](../../../client-api/operations/patching/single-document.mdx#change-value-of-single-field)
+ * [Change values of two fields](../../../client-api/operations/patching/single-document.mdx#change-values-of-two-fields)
+ * [Increment value](../../../client-api/operations/patching/single-document.mdx#increment-value)
+ * [Add or increment](../../../client-api/operations/patching/single-document.mdx#add-or-increment)
+ * [Add or patch](../../../client-api/operations/patching/single-document.mdx#add-or-patch)
+ * [Add or patch to an existing array](../../../client-api/operations/patching/single-document.mdx#add-or-patch-to-an-existing-array)
+ * [Add item to array](../../../client-api/operations/patching/single-document.mdx#add-item-to-array)
+ * [Insert item into specific position in array](../../../client-api/operations/patching/single-document.mdx#insert-item-into-specific-position-in-array)
+ * [Modify item in specific position in array](../../../client-api/operations/patching/single-document.mdx#modify-item-in-specific-position-in-array)
+ * [Remove items from array](../../../client-api/operations/patching/single-document.mdx#remove-items-from-array)
+ * [Loading documents in a script](../../../client-api/operations/patching/single-document.mdx#loading-documents-in-a-script)
+ * [Remove property](../../../client-api/operations/patching/single-document.mdx#remove-property)
+ * [Rename property](../../../client-api/operations/patching/single-document.mdx#rename-property)
+ * [Add document](../../../client-api/operations/patching/single-document.mdx#add-document)
+ * [Clone document](../../../client-api/operations/patching/single-document.mdx#clone-document)
+ * [Increment counter](../../../client-api/operations/patching/single-document.mdx#increment-counter)
+ * [Delete counter](../../../client-api/operations/patching/single-document.mdx#delete-counter)
+ * [Get counter](../../../client-api/operations/patching/single-document.mdx#get-counter)
+ * [Patching using inline string compilation](../../../client-api/operations/patching/single-document.mdx#patching-using-inline-string-compilation)
+
+
+
+## API Overview
+
+## Session API
+
+A type-safe session interface that allows performing the most common patch operations.
+The patch request will be sent to the server only when calling `SaveChanges`.
+This way it's possible to perform multiple operations in one request to the server.
+
+
+
+### Increment field value
+`Session.Advanced.Increment`
+
+
+{`void Increment<T, U>(T entity, Expression<Func<T, U>> fieldPath, U delta);
+
+void Increment<T, U>(string id, Expression<Func<T, U>> fieldPath, U delta);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **T** | `Type` | Entity type |
+| **U** | `Type` | Field type; must be a numeric type, or `string`/`char` for string concatenation |
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `Load` or `Query` operation, this way, the session can track down the entity's ID |
+| **entity id** | `string` | Entity ID on which the operation should be performed. |
+| **fieldPath** | `Expression<Func<T, U>>` | Lambda describing the path to the field. |
+| **delta** | `U` | Value to be added. |
+
+* Note how numbers are handled with the [JavaScript engine](../../../server/kb/numbers-in-ravendb.mdx) in RavenDB.
+`Session.Advanced.AddOrIncrement`
+
+
+{`void AddOrIncrement<T, TU>(string id, T entity, Expression<Func<T, TU>> path, TU valToAdd);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **T** | `Type` | Entity type |
+| **TU** | `Type` | Field type; must be a numeric type, or `string`/`char` for string concatenation |
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `Load` or `Query` operation, this way, the session can track down the entity's ID |
+| **entity id** | `string` | Entity ID on which the operation should be performed. |
+| **path** | `Expression<Func<T, TU>>` | Lambda describing the path to the field. |
+| **valToAdd** | `U` | Value to be added. |
+
+
+
+
+
+### Set field value
+`Session.Advanced.Patch`
+
+
+{`void Patch<T, U>(string id, Expression<Func<T, U>> fieldPath, U value);
+
+void Patch<T, U>(T entity, Expression<Func<T, U>> fieldPath, U value);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **T** | `Type` | Entity type |
+| **U** | `Type` | Field type|
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `Load` or `Query` operation. This way the session can track down the entity's ID. |
+| **entity id** | `string` | Entity ID on which the operation should be performed. |
+| **fieldPath** | `Expression<Func<T, U>>` | Lambda describing the path to the field. |
+| **value** | `U` | Value to set. |
+`Session.Advanced.AddOrPatch`
+
+
+
+{`void AddOrPatch<T, TU>(string id, T entity, Expression<Func<T, TU>> path, TU value);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **T** | `Type` | Entity type |
+| **TU** | `Type` | Field type|
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `Load` or `Query` operation. This way the session can track down the entity's ID. |
+| **entity id** | `string` | Entity ID on which the operation should be performed. |
+| **fieldPath** | `Expression<Func<T, TU>>` | Lambda describing the path to the field. |
+| **value** | `U` | Value to set. |
+
+
+
+
+
+### Array manipulation
+`Session.Advanced.Patch`
+
+
+{`void Patch<T, U>(T entity, Expression<Func<T, IEnumerable<U>>> fieldPath,
+    Expression<Func<JavaScriptArray<U>, object>> arrayModificationLambda);
+
+void Patch<T, U>(string id, Expression<Func<T, IEnumerable<U>>> fieldPath,
+    Expression<Func<JavaScriptArray<U>, object>> arrayModificationLambda);
+`}
+
+
+
+| Parameters | Type | Description |
+|------------------------------| ------------- | ----- |
+| **T** | `Type` | Entity type |
+| **U** | `Type` | Field type|
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `Load` or `Query` operation. This way the session can track down the entity's ID. |
+| **entity id** | `string` | Entity ID on which the operation should be performed. |
+| **fieldPath** | `Expression<Func<T, IEnumerable<U>>>` | Lambda describing the path to the field. |
+| **arrayModificationLambda** | `Expression<Func<JavaScriptArray<U>, object>>` | Lambda that modifies the array, see `JavaScriptArray` below. |
+`Session.Advanced.AddOrPatch`
+
+
+{`void AddOrPatch<T, TU>(string id, T entity, Expression<Func<T, IEnumerable<TU>>> path,
+    Expression<Func<JavaScriptArray<TU>, object>> arrayAdder);
+`}
+
+
+
+| Parameters | Type | Description |
+| ------------- | ------------- | ----- |
+| **T** | `Type` | Entity type |
+| **TU** | `Type` | Field type|
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `Load` or `Query` operation. This way the session can track down the entity's ID. |
+| **entity id** | `string` | Entity ID on which the operation should be performed. |
+| **path** | `Expression<Func<T, IEnumerable<TU>>>` | Lambda describing the path to the field. |
+| **arrayAdder** | `Expression<Func<JavaScriptArray<TU>, object>>` | Lambda that modifies the array, see `JavaScriptArray` below. |
+
+
+
+`JavaScriptArray` allows building lambdas representing array manipulations for patches.
+
+| Method Signature | Return Type | Description |
+|--------|:-----|-------------|
+| **Add(T item)** | `JavaScriptArray<T>` | Allows adding `item` to the array. |
+| **Add(params T[] items)** | `JavaScriptArray<T>` | Allows adding multiple items to the array. |
+| **RemoveAt(int index)** | `JavaScriptArray<T>` | Removes the item at position `index` in the array. |
+| **RemoveAll(Func<T, bool> predicate)** | `JavaScriptArray<T>` | Removes all the items in the array that satisfy the given predicate. |
+
+
+
+
+
+
+
+## Session API using Defer
+
+The non-typed Session API for patches uses the `Session.Advanced.Defer` function which allows registering one or more commands.
+One of the possible commands is `PatchCommandData`, which describes a single-document patch command.
+The patch request will be sent to the server only when calling `SaveChanges`; this way it's possible to perform multiple operations in one request to the server.
+
+`Session.Advanced.Defer`
+
+
+{`void Defer(ICommandData[] commands);
+`}
+
+
+
+
+
+#### PatchCommandData
+
+| Constructor | Type | Description |
+|--------------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | ID of the document to be patched. |
+| **changeVector** | `string` | [Can be null] Change vector of the document to be patched, used to verify that the document was not changed before the patch reached it. |
+| **patch** | `PatchRequest` | Patch request to be performed on the document. |
+| **patchIfMissing** | `PatchRequest` | [Can be null] Patch request to be performed if no document with the given ID was found. |
+
+
+
+
+
+#### PatchRequest
+
+We highly recommend using scripts with parameters. This allows RavenDB to cache scripts and boost performance.
+Parameters can be accessed in the script through the `args` object and passed using PatchRequest's "Values" parameter.
+
+| Property | Type | Description |
+|------------|------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Script** | `string` | The patching script, written in JavaScript. |
+| **Values** | `Dictionary<string, object>` | Parameters to be passed to the script. The parameters can be accessed using the '$' prefix. A parameter starting with '$' is used as-is, without further concatenation. |
+
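+For instance, a minimal parameterized `PatchRequest` (a sketch; the script and the "Discount" parameter are illustrative):
+
+
+
+{`var patchRequest = new PatchRequest
+\{
+    // The "Discount" entry from Values is exposed to the script as args.Discount
+    Script = @"this.Discount = args.Discount;",
+    Values =
+    \{
+        \{"Discount", 0.25\}
+    \}
+\};
+`}
+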
+
+
+
+
+## Operations API
+
+An operations interface that exposes the full functionality and allows performing ad-hoc patch operations without creating a session.
+
+`Raven.Client.Documents.Operations.Send`
+`Raven.Client.Documents.Operations.SendAsync`
+
+
+
+{`PatchStatus Send(PatchOperation operation);
+
+Task<PatchStatus> SendAsync(PatchOperation operation,
+ SessionInfo sessionInfo = null,
+ CancellationToken token = default(CancellationToken));
+`}
+
+
+
+
+
+| Constructor | Type | Description |
+|-------------------------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | ID of the document to be patched. |
+| **changeVector** | `string` | Change vector of the document to be patched. Used to verify that the document was not modified before the patch reached it. Can be `null`. |
+| **patch** | `PatchRequest` | Patch request to perform on the document. |
+| **patchIfMissing** | `PatchRequest` | Patch request to perform if the specified document is not found. Will run only if no `changeVector` was passed. Can be `null`. |
+| **skipPatchIfChangeVectorMismatch** | `bool` | `true` - do not patch if the document has been modified. <br/> `false` (Default) - execute the patch even if the document has been modified. <br/> An exception is thrown if this param is `false` + `changeVector` has a value + a document with that ID and change vector was not found. |
+
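+For example, a sketch that skips the patch if the document was modified since a known change vector (the change-vector variable is a placeholder):
+
+
+
+{`// A change vector previously read for the document,
+// e.g. via session.Advanced.GetChangeVectorFor(entity)
+string knownChangeVector = "";
+
+store.Operations.Send(new PatchOperation(
+    id: "employees/1",
+    changeVector: knownChangeVector,
+    patch: new PatchRequest
+    \{
+        Script = @"this.FirstName = args.FirstName;",
+        Values =
+        \{
+            \{"FirstName", "Robert"\}
+        \}
+    \},
+    patchIfMissing: null,
+    skipPatchIfChangeVectorMismatch: true));
+`}
+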
+
+
+
+
+## List of script methods
+
+This is a partial list of the JavaScript methods that can be used in patch scripts.
+See the more comprehensive list at [Knowledge Base: JavaScript Engine](../../../server/kb/javascript-engine.mdx#predefined-javascript-functions).
+
+| Method | Arguments | Description |
+|----------------------|-----------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **load** | `string` or `string[]` | Loads one or more documents into the context of the script by their document IDs |
+| **loadPath** | A document and a path to an ID within that document | Loads a related document by the path to its ID |
+| **del** | Document ID; change vector | Delete the given document by its ID. If you add the expected change vector and the document's current change vector does not match, the document will _not_ be deleted. |
+| **put** | Document ID; document; change vector | Create or overwrite a document with a specified ID and entity. If you try to overwrite an existing document and pass the expected change vector, the put will fail if the specified change vector does not match the document's current change vector. |
+| **cmpxchg** | Key | Load a compare exchange value into the context of the script using its key |
+| **getMetadata** | Document | Returns the document's metadata |
+| **id** | Document | Returns the document's ID |
+| **lastModified** | Document | Returns the `DateTime` of the most recent modification made to the given document |
+| **counter** | Document; counter name | Returns the value of the specified counter in the specified document |
+| **counterRaw** | Document; counter name | Returns the specified counter in the specified document as a key-value pair |
+| **incrementCounter** | Document; counter name | Increases the value of the counter by one |
+| **deleteCounter** | Document; counter name | Deletes the counter |
+| **spatial.distance** | Two points by latitude and longitude; spatial units | Finds the distance between two points on the earth |
+| **timeseries** | Document; the time series' name | Returns the specified time series object |
+
+
+
+## Examples
+
+### Change value of single field
+
+
+
+
+{`// change FirstName to Robert
+session.Advanced.Patch(
+ "employees/1",
+ x => x.FirstName, "Robert");
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// change FirstName to Robert
+session.Advanced.Defer(new PatchCommandData(
+ id: "employees/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"this.FirstName = args.FirstName;",
+ Values =
+ {
+ {"FirstName", "Robert"}
+ }
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// change FirstName to Robert
+store.Operations.Send(new PatchOperation(
+ id: "employees/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"this.FirstName = args.FirstName;",
+ Values =
+ {
+ {"FirstName", "Robert"}
+ }
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Change values of two fields
+
+
+
+
+{`// Modify FirstName to Robert and LastName to Carter in single request
+// ===================================================================
+
+// The two Patch operations below are sent via 'SaveChanges()' which complete transactionally,
+// as this call generates a single HTTP request to the database.
+// Either both will succeed or both will be rolled back since they are applied within the same transaction.
+// However, on the server side, the two Patch operations are still executed separately.
+// To achieve atomicity at the level of a single server-side operation, use 'Defer' or the operations syntax.
+
+session.Advanced.Patch("employees/1", x => x.FirstName, "Robert");
+session.Advanced.Patch("employees/1", x => x.LastName, "Carter");
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// Change FirstName to Robert and LastName to Carter in single request
+// Note that here we do maintain the atomicity of the operation
+session.Advanced.Defer(new PatchCommandData(
+ id: "employees/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"
+ this.FirstName = args.UserName.FirstName;
+ this.LastName = args.UserName.LastName;",
+ Values =
+ {
+ {
+ "UserName", new
+ {
+ FirstName = "Robert",
+ LastName = "Carter"
+ }
+ }
+ }
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// Change FirstName to Robert and LastName to Carter in single request
+// Note that here we do maintain the atomicity of the operation
+store.Operations.Send(new PatchOperation(
+ id: "employees/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"
+ this.FirstName = args.UserName.FirstName;
+ this.LastName = args.UserName.LastName;",
+ Values =
+ {
+ {
+ "UserName", new
+ {
+ FirstName = "Robert",
+ LastName = "Carter"
+ }
+ }
+ }
+ }, patchIfMissing: null));
+`}
+
+
+
+### Increment value
+
+
+
+
+{`// increment UnitsInStock property value by 10
+session.Advanced.Increment("products/1-A", x => x.UnitsInStock, 10);
+
+session.SaveChanges();
+`}
+
+
+
+
+{`session.Advanced.Defer(new PatchCommandData(
+ id: "products/1-A",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"this.UnitsInStock += args.UnitsToAdd;",
+ Values =
+ {
+ {"UnitsToAdd", 10}
+ }
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`store.Operations.Send(new PatchOperation(
+ id: "products/1-A",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"this.UnitsInStock += args.UnitsToAdd;",
+ Values =
+ {
+ {"UnitsToAdd", 10}
+ }
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Add or increment
+
+`AddOrIncrement` increments an existing field, or adds a new field to documents where it doesn't yet exist.
+
+
+
+{`session.Advanced.AddOrIncrement(
+    // Specify the document id and the entity on which the operation is performed.
+    id,
+    new User
+    {
+        FirstName = "John",
+        LastName = "Doe",
+        LoginCount = 1
+    },
+    // The path to the field and the value to increment by.
+    x => x.LoginCount, 1);
+
+session.SaveChanges();
+`}
+
+
+### Add or patch
+
+`AddOrPatch` adds or edits field(s) in a single document.
+
+If the document doesn't yet exist, this operation adds the document but doesn't patch it.
+
+
+
+{`session.Advanced.AddOrPatch(
+    // Specify the document id and the entity on which the operation is performed.
+    id,
+    new User
+    {
+        FirstName = "John",
+        LastName = "Doe",
+        LastLogin = DateTime.Now
+    },
+    // The path to the field and the value to set.
+    x => x.LastLogin, new DateTime(2021, 9, 12));
+
+session.SaveChanges();
+`}
+
+
+### Add or patch to an existing array
+
+This sample shows how to patch an existing array or add it to documents where it doesn't yet exist.
+
+
+
+{`session.Advanced.AddOrPatch(
+    // Specify the document id and the entity on which the operation is performed.
+    id,
+    new User
+    {
+        FirstName = "John",
+        LastName = "Doe",
+        LoginTimes = new List<DateTime>
+        {
+            DateTime.UtcNow
+        }
+    },
+    // The path to the array field.
+    x => x.LoginTimes,
+    // Modify the array.
+    u => u.Add(new DateTime(1993, 09, 12), new DateTime(2000, 01, 01)));
+
+session.SaveChanges();
+`}
+
+
+### Add item to array
+
+
+
+
+{`// add a new comment to Comments
+session.Advanced.Patch("blogposts/1",
+ x => x.Comments,
+ comments => comments.Add(new BlogComment
+ {
+ Content = "Lore ipsum",
+ Title = "Some title"
+ }));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// add a new comment to Comments
+session.Advanced.Defer(new PatchCommandData(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = "this.Comments.push(args.Comment);",
+ Values =
+ {
+ {
+ "Comment", new BlogComment
+ {
+ Content = "Lore ipsum",
+ Title = "Some title"
+ }
+ }
+ }
+
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// add a new comment to Comments
+store.Operations.Send(new PatchOperation(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = "this.Comments.push(args.Comment);",
+ Values =
+ {
+ {
+ "Comment", new BlogComment
+ {
+ Content = "Lore ipsum",
+ Title = "Some title"
+ }
+ }
+ }
+
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Insert item into specific position in array
+
+Inserting an item at a specific position is supported only by the non-typed APIs.
+
+
+
+
+{`// insert a new comment at position 1 to Comments
+session.Advanced.Defer(new PatchCommandData(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = "this.Comments.splice(1, 0, args.Comment);",
+ Values =
+ {
+ {
+ "Comment", new BlogComment
+ {
+ Content = "Lore ipsum",
+ Title = "Some title"
+ }
+ }
+ }
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`store.Operations.Send(new PatchOperation(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = "this.Comments.splice(1, 0, args.Comment);",
+ Values =
+ {
+ {
+ "Comment", new BlogComment
+ {
+ Content = "Lore ipsum",
+ Title = "Some title"
+ }
+ }
+ }
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Modify item in specific position in array
+
+Modifying an item at a specific position is supported only by the non-typed APIs.
+
+
+
+
+{`// modify a comment at position 3 in Comments
+session.Advanced.Defer(new PatchCommandData(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = "this.Comments.splice(3, 1, args.Comment);",
+ Values =
+ {
+ {
+ "Comment", new BlogComment
+ {
+ Content = "Lore ipsum",
+ Title = "Some title"
+ }
+ }
+ }
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// modify a comment at position 3 in Comments
+store.Operations.Send(new PatchOperation(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = "this.Comments.splice(3, 1, args.Comment);",
+ Values =
+ {
+ {
+ "Comment", new BlogComment
+ {
+ Content = "Lore ipsum",
+ Title = "Some title"
+ }
+ }
+ }
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Remove items from array
+
+
+
+
+{`// filter out all comments of a blogpost which contain the word "wrong" in their content
+session.Advanced.Patch("blogposts/1",
+ x => x.Comments,
+ comments => comments.RemoveAll(y => y.Content.Contains("wrong")));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// filter out all comments of a blogpost which contain the word "wrong" in their content
+session.Advanced.Defer(new PatchCommandData(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"this.Comments = this.Comments.filter(comment=>
+ !comment.Content.includes(args.Text));",
+ Values =
+ {
+ {"Text", "wrong"}
+ }
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// filter out all comments of a blogpost which contain the word "wrong" in their content
+store.Operations.Send(new PatchOperation(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"this.Comments = this.Comments.filter(comment=>
+ !comment.Content.includes(args.Text));",
+ Values =
+ {
+ {"Text", "wrong"}
+ }
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Loading documents in a script
+
+Loading documents is supported only by the non-typed APIs.
+
+
+
+
+{`// update product names in order, according to loaded product documents
+session.Advanced.Defer(new PatchCommandData(
+ id: "orders/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"this.Lines.forEach(line=> {
+ var productDoc = load(line.Product);
+ line.ProductName = productDoc.Name;
+ });"
+ }, patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`// update product names in order, according to loaded product documents
+store.Operations.Send(new PatchOperation(
+ id: "blogposts/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"this.Lines.forEach(line=> {
+ var productDoc = load(line.Product);
+ line.ProductName = productDoc.Name;
+ });"
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Remove property
+
+Removing a property is supported only by the non-typed APIs.
+
+
+
+
+{`// remove property Age
+session.Advanced.Defer(new PatchCommandData(
+ id: "employees/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"delete this.Age;"
+ },
+ patchIfMissing: null));
+session.SaveChanges();
+`}
+
+
+
+
+{`// remove property Age
+store.Operations.Send(new PatchOperation(
+ id: "employees/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"delete this.Age;"
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Rename property
+
+Renaming a property is supported only by the non-typed APIs.
+
+
+
+
+{`// rename FirstName to Name
+session.Advanced.Defer(new PatchCommandData(
+ id: "employees/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"var firstName = this[args.Rename.Old];
+ delete this[args.Rename.Old];
+ this[args.Rename.New] = firstName;",
+ Values =
+ {
+ {
+ "Rename", new
+ {
+ Old = "FirstName",
+ New = "Name"
+ }
+ }
+ }
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`store.Operations.Send(new PatchOperation(
+ id: "employees/1",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"var firstName = this[args.Rename.Old];
+ delete this[args.Rename.Old];
+ this[args.Rename.New] = firstName;",
+ Values =
+ {
+ {
+ "Rename", new
+ {
+ Old = "FirstName",
+ New = "Name"
+ }
+ }
+ }
+ },
+ patchIfMissing: null));
+`}
+
+
+
+### Add document
+
+Adding a new document is supported only by the non-typed APIs.
+
+
+
+
+{`session.Advanced.Defer(new PatchCommandData("employees/1-A", null,
+ new PatchRequest
+ {
+ Script = "put('orders/', { Employee: id(this) });",
+ }, null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`store.Operations.Send(new PatchOperation("employees/1-A", null, new PatchRequest
+{
+ Script = "put('orders/', { Employee: id(this) });",
+}));
+`}
+
+
+
+### Clone document
+
+To clone a document via patching, use the `put` method within the patching script as follows:
+
+
+
+
+{`session.Advanced.Defer(new PatchCommandData("employees/1-A", null,
+ new PatchRequest
+ {
+ Script = "put('employees/', this);",
+ }, null));
+
+session.SaveChanges();
+`}
+
+
+
+
+{`store.Operations.Send(new PatchOperation("employees/1-A", null, new PatchRequest
+{
+ Script = "put('employees/', this);",
+}));
+`}
+
+
+
+
+
+
+**Attachments, Counters, Time Series, and Revisions:**
+
+ * When cloning a document via patching, only the document's fields are copied to the new document.
+ Attachments, counters, time series data, and revisions from the source document will Not be copied automatically.
+ * To manage time series & counters via patching, you can use the pre-defined JavaScript methods listed here:
+ [Counters methods](../../../server/kb/javascript-engine.mdx#counter-operations) & [Time series methods](../../../server/kb/javascript-engine.mdx#time-series-operations).
+ * Note: When [Cloning a document via the Studio](../../../studio/database/documents/create-new-document.mdx#clone-an-existing-document),
+    attachments, counters, time series, and revisions will be copied automatically.
+
+**Archived documents:**
+
+ * If the source document is archived, the cloned document will Not be archived.
+ To learn more about archived documents, see [Data archival overview](../../../data-archival/overview.mdx).
+
+
+### Increment counter
+
+To increment or create a counter, use the <code>incrementCounter</code> method as follows:
+
+
+
+
+{`var order = session.Load<Order>("orders/1-A");
+session.CountersFor(order).Increment("Likes", 1);
+session.SaveChanges();
+`}
+
+
+
+
+{`session.Advanced.Defer(new PatchCommandData("orders/1-A", null,
+ new PatchRequest
+ {
+ Script = "incrementCounter(this, args.name, args.val);",
+ Values =
+ {
+ { "name", "Likes" },
+ { "val", 20 }
+ }
+ }, null));
+session.SaveChanges();
+`}
+
+
+
+
+{`store.Operations.Send(new PatchOperation("orders/1-A", null, new PatchRequest
+{
+ Script = "incrementCounter(this, args.name, args.val);",
+ Values =
+ {
+ { "name", "Likes" },
+ { "val", -1 }
+ }
+}));
+`}
+
+
+
+
+
+
+The method can be called by document ID or by document reference, and the value can be negative.
+
+
+### Delete counter
+
+To delete a counter, use the <code>deleteCounter</code> method as follows:
+
+
+
+
+{`session.CountersFor("orders/1-A").Delete("Likes");
+session.SaveChanges();
+`}
+
+
+
+
+{`session.Advanced.Defer(new PatchCommandData("products/1-A", null,
+ new PatchRequest
+ {
+ Script = "deleteCounter(this, args.name);",
+ Values =
+ {
+ { "name", "Likes" },
+ }
+ }, null));
+session.SaveChanges();
+`}
+
+
+
+
+{`store.Operations.Send(new PatchOperation("products/1-A", null, new PatchRequest
+{
+ Script = "deleteCounter(this, args.name);",
+ Values =
+ {
+ { "name", "Likes" },
+ }
+}));
+`}
+
+
+
+
+
+
+The method can be called by document ID or by document reference.
+
+
+### Get counter
+
+To get a counter value while patching, use the <code>counter</code> method as follows:
+
+
+
+
+{`var order = session.Load<Order>("orders/1-A");
+var counters = session.Advanced.GetCountersFor(order);
+`}
+
+
+
+
+{`session.Advanced.Defer(new PatchCommandData("orders/1-A", null,
+ new PatchRequest
+ {
+ Script = @"var likes = counter(this.Company, args.name);
+ put('result/', {company: this.Company, likes: likes});",
+ Values =
+ {
+ { "name", "Likes" },
+ }
+ }, null));
+session.SaveChanges();
+`}
+
+
+
+
+{`store.Operations.Send(new PatchOperation("orders/1-A", null, new PatchRequest
+{
+ Script = @"var likes = counter(this.Company, args.name);
+ put('result/', {company: this.Company, likes: likes});",
+ Values =
+ {
+ { "name", "Likes" },
+ }
+}));
+`}
+
+
+
+
+
+
+The method can be called by document ID or by document reference.
+
+
+### Patching using inline string compilation
+
+* When using a JavaScript script with the _defer_ or _operations_ syntax,
+ you can apply logic using **inline string compilation**.
+
+* To enable this, set the [Patching.AllowStringCompilation](../../../server/configuration/patching-configuration.mdx#patchingallowstringcompilation) configuration key to _true_.
+
+
+
+
+{`// Modify value using inline string compilation
+// ============================================
+
+session.Advanced.Defer(new PatchCommandData(
+ id: "products/1-A",
+ changeVector: null,
+ patch: new PatchRequest
+ {
+ Script = @"
+ // Give a discount if the product is low in stock:
+ const functionBody = 'return doc.UnitsInStock < lowStock ? ' +
+ 'doc.PricePerUnit * discount :' +
+ 'doc.PricePerUnit;';
+
+ // Define a function that processes the document and returns the price:
+ const calcPrice = new Function('doc', 'lowStock', 'discount', functionBody);
+
+ // Update the product's PricePerUnit based on the function:
+ this.PricePerUnit = calcPrice(this, args.LowStock, args.Discount);",
+
+ Values = {
+ {"LowStock", "10"},
+ {"Discount", "0.8"}
+ }
+ },
+ patchIfMissing: null));
+
+session.SaveChanges();
+
+// The same can be applied using the 'operations' syntax.
+`}
+
+
+
+
+{`// Modify value using inline string compilation
+// ============================================
+
+store.Operations.Send(new PatchOperation("products/1-A", null, new PatchRequest
+{
+ Script = @"
+ // Give a discount if the product is low in stock:
+ const discountExpression = 'this.UnitsInStock < args.LowStock ? ' +
+ 'this.PricePerUnit * args.Discount :' +
+ 'this.PricePerUnit';
+
+ // Call 'eval', pass the string expression that contains your logic:
+ const price = eval(discountExpression);
+
+ // Update the product's PricePerUnit:
+ this.PricePerUnit = price;",
+
+ Values = {
+ {"LowStock", "10"},
+ {"Discount", "0.8"}
+ }
+}));
+
+// The same can be applied using the 'session defer' syntax.
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/_single-document-java.mdx b/versioned_docs/version-7.1/client-api/operations/patching/_single-document-java.mdx
new file mode 100644
index 0000000000..896b74154d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/_single-document-java.mdx
@@ -0,0 +1,784 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+The **Patch** operation is used to perform partial document updates without having to load, modify, and save a full document.
+The whole operation is executed on the server side and is useful as a performance enhancement or for updating denormalized data in entities.
+
+The current page deals with patch operations on single documents.
+
+Patching has three possible interfaces: [Session API](../../../client-api/operations/patching/single-document.mdx#session-api), [Session API using defer](../../../client-api/operations/patching/single-document.mdx#session-api-using-defer), and [Operations API](../../../client-api/operations/patching/single-document.mdx#operations-api).
+
+Patching can be done from the client as well as from the Studio.
+
+In this page:
+[API overview](../../../client-api/operations/patching/single-document.mdx#api-overview)
+[Examples](../../../client-api/operations/patching/single-document.mdx#examples)
+
+
+## API overview
+
+## Session API
+
+A session interface that allows performing the most common patch operations.
+The patch request will be sent to the server only when `saveChanges` is called; this way it is possible to perform multiple operations in one request to the server.
+
+### Increment Field Value
+`session.advanced().increment`
+
+
+{`<T, U> void increment(String id, String path, U valueToAdd);
+
+<T, U> void increment(T entity, String path, U valueToAdd);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **T** | `Class` | Entity class |
+| **U** | `Class` | Field class; must be a numeric type, or a `String`/`char` for string concatenation |
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `load` or `query` operation, so that the session can track its ID |
+| **entity id** | `String` | Entity ID on which the operation should be performed. |
+| **valueToAdd** | `U` | Value to be added. |
+
+* Note how numbers are handled with the [JavaScript engine](../../../server/kb/numbers-in-ravendb.mdx) in RavenDB.
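+
+For illustration, a minimal sketch of incrementing a numeric field with this method (the full flow appears in the Examples section below):
+
+
+
+{`// Increment the UnitsInStock field by 10; the request is sent upon saveChanges()
+session.advanced().increment("products/1-A", "UnitsInStock", 10);
+session.saveChanges();
+`}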
+
+### Set Field Value
+`session.advanced().patch`
+
+
+{`<T, U> void patch(String id, String path, U value);
+
+<T, U> void patch(T entity, String path, U value);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **T** | `Class` | Entity Class |
+| **U** | `Class` | Field class |
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `load` or `query` operation, so that the session can track its ID |
+| **entity id** | `String` | Entity ID on which the operation should be performed. |
+| **value** | `U` | Value to set. |
+
+### Array Manipulation
+`session.advanced().patch`
+
+
+{`<T, U> void patch(T entity, String pathToArray, Consumer<JavaScriptArray<U>> arrayAdder);
+
+<T, U> void patch(String id, String pathToArray, Consumer<JavaScriptArray<U>> arrayAdder);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **T** | `Class` | Entity class |
+| **U** | `Class` | Field class |
+| **entity** | `T` | Entity on which the operation should be performed. The entity should be one that was returned by the current session in a `load` or `query` operation, so that the session can track its ID |
+| **entity id** | `String` | Entity ID on which the operation should be performed. |
+| **arrayAdder** | `Consumer<JavaScriptArray<U>>` | Lambda that modifies the array; see `JavaScriptArray` below. |
+
+
+`JavaScriptArray` allows building lambdas representing array manipulations for patches.
+
+| Method Signature| Return Type | Description |
+|--------|:-----|-------------|
+| **add(T item)** | `JavaScriptArray` | Adds `item` to the array. |
+| **add(T... items)** | `JavaScriptArray` | Adds the given items to the array. |
+| **add(Collection<T> items)** | `JavaScriptArray` | Adds the given items to the array. |
+| **removeAt(int index)** | `JavaScriptArray` | Removes the item at position `index` in the array. |
+
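+For illustration, a minimal sketch of manipulating an array with `JavaScriptArray` through `session.advanced().patch` - assuming a blog post document whose `comments` field is an array, with `firstComment` and `secondComment` as placeholder `BlogComment` instances. The calls chain because each `JavaScriptArray` method returns the array object:
+
+
+
+{`// Add two comments and remove the first item from the 'comments' array
+session.advanced().patch("blogposts/1", "comments",
+    comments -> comments
+        .add(firstComment, secondComment)
+        .removeAt(0));
+
+session.saveChanges();
+`}
+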
+
+
+
+
+## Session API using defer
+The low-level session API for patching uses the `session.advanced().defer` function, which allows registering one or more commands.
+One of the possible commands is `PatchCommandData`, which describes a patch command for a single document.
+The patch request will be sent to the server only when `saveChanges` is called; this way it is possible to perform multiple operations in one request to the server.
+
+`session.advanced().defer`
+
+
+{`void defer(ICommandData[] commands);
+`}
+
+
+
+
+
+| Constructor| | |
+|--------|:-----|-------------|
+| **id** | `String` | ID of the document to be patched. |
+| **changeVector** | `String` | [Can be null] Change vector of the document to be patched, used to verify that the document was not changed before the patch reached it. |
+| **patch** | `PatchRequest` | Patch request to be performed on the document. |
+| **patchIfMissing** | `PatchRequest` | [Can be null] Patch request to be performed if no document with the given ID was found. |
+
+
+
+
+
+We highly recommend using scripts with parameters. This allows RavenDB to cache scripts and boost performance. Parameters are passed using the PatchRequest's "values" property and accessed in the script through the "args" object, as shown in the sketch below.
+
+| Members | | |
+| ------------- | ------------- | ----- |
+| **script** | `String` | JavaScript code to be run. |
+| **values** | `Map<String, Object>` | Parameters to be passed to the script. They can be accessed in the script through the "args" object. |
+
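+A minimal sketch of a parameterized patch registered via `defer`, assuming an employee document; the script reads its parameters from the "args" object:
+
+
+
+{`PatchRequest patchRequest = new PatchRequest();
+// The script accesses its parameters through the 'args' object
+patchRequest.setScript("this.FirstName = args.firstName;");
+patchRequest.setValues(Collections.singletonMap("firstName", "Robert"));
+
+// Register the patch command on the session; it is sent to the server on saveChanges()
+session.advanced().defer(new PatchCommandData("employees/1", null, patchRequest, null));
+session.saveChanges();
+`}
+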
+
+
+
+
+
+## Operations API
+An operations interface that exposes the full functionality and allows performing ad-hoc patch operations without creating a session.
+
+
+
+{`PatchStatus send(PatchOperation operation);
+
+PatchStatus send(PatchOperation operation, SessionInfo sessionInfo);
+
+<TEntity> PatchOperation.Result<TEntity> send(Class<TEntity> entityClass, PatchOperation operation);
+
+<TEntity> PatchOperation.Result<TEntity> send(Class<TEntity> entityClass, PatchOperation operation, SessionInfo sessionInfo);
+`}
+
+
+
+
+
+| Constructor| | |
+|--------|:-----|-------------|
+| **id** | `String` | ID of the document to be patched. |
+| **changeVector** | `String` | [Can be null] Change vector of the document to be patched, used to verify that the document was not changed before the patch reached it. |
+| **patch** | `PatchRequest` | Patch request to be performed on the document. |
+| **patchIfMissing** | `PatchRequest` | [Can be null] Patch request to be performed if no document with the given ID was found. Will run only if no `changeVector` was passed. |
+| **skipPatchIfChangeVectorMismatch** | `boolean` | If false and `changeVector` has a value, an exception is thrown when no document with that ID and change vector is found. |
+
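+A minimal sketch of the typed `send` overload, which returns the patch status together with the patched entity. The accessor names below (`getStatus`, `getDocument`) are assumptions inferred from the result type rather than verified API:
+
+
+
+{`PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.FirstName = args.firstName;");
+patchRequest.setValues(Collections.singletonMap("firstName", "Robert"));
+
+PatchOperation patchOperation = new PatchOperation("employees/1-A", null, patchRequest);
+
+// The typed overload returns the status along with the modified document
+PatchOperation.Result<Employee> result =
+    store.operations().send(Employee.class, patchOperation);
+// result.getStatus()   - whether the document was patched
+// result.getDocument() - the patched Employee entity
+`}
+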
+
+
+
+
+## List of Script Methods
+
+This is a list of a few of the JavaScript methods that can be used in patch scripts. See the
+more comprehensive list at [Knowledge Base: JavaScript Engine](../../../server/kb/javascript-engine.mdx#predefined-javascript-functions).
+
+| Method | Arguments | Description |
+| - | - | - |
+| **load** | `string` or `string[]` | Loads one or more documents into the context of the script by their document IDs |
+| **loadPath** | A document and a path to an ID within that document | Loads a related document by the path to its ID |
+| **del** | Document ID; change vector | Deletes the given document by its ID. If you pass the expected change vector and the document's current change vector does not match, the document will _not_ be deleted. |
+| **put** | Document ID; document; change vector | Creates or overwrites a document with a specified ID and entity. If you try to overwrite an existing document and pass the expected change vector, the put will fail if the specified change vector does not match the document's current change vector. |
+| **cmpxchg** | Key | Loads a compare exchange value into the context of the script using its key |
+| **getMetadata** | Document | Returns the document's metadata |
+| **id** | Document | Returns the document's ID |
+| **lastModified** | Document | Returns the `DateTime` of the most recent modification made to the given document |
+| **counter** | Document; counter name | Returns the value of the specified counter in the specified document |
+| **counterRaw** | Document; counter name | Returns the specified counter in the specified document as a key-value pair |
+| **incrementCounter** | Document; counter name; increment value | Increases the counter by the specified value |
+| **deleteCounter** | Document; counter name | Deletes the counter |
+
+
+
+## Examples
+
+### Change Field's Value
+
+
+
+
+{`// change FirstName to Robert
+session
+    .advanced()
+    .patch("employees/1", "FirstName", "Robert");
+
+session.saveChanges();
+`}
+
+
+
+
+{`// change firstName to Robert
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.FirstName = args.firstName");
+patchRequest.setValues(Collections.singletonMap("firstName", "Robert"));
+PatchCommandData patchCommandData = new PatchCommandData("employees/1", null, patchRequest, null);
+session.advanced().defer(patchCommandData);
+
+session.saveChanges();
+`}
+
+
+
+
+{`// change firstName to Robert
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.FirstName = args.firstName;");
+patchRequest.setValues(Collections.singletonMap("firstName", "Robert"));
+PatchOperation patchOperation = new PatchOperation("employees/1", null, patchRequest);
+store.operations().send(patchOperation);
+`}
+
+
+
+
+### Change Values of Two Fields
+
+
+
+
+{`// Modify FirstName to Robert and LastName to Carter in single request
+// ===================================================================
+
+// The two Patch operations below are sent via 'saveChanges()' which complete transactionally,
+// as this call generates a single HTTP request to the database.
+// Either both will succeed or both will be rolled back since they are applied within the same transaction.
+// However, on the server side, the two Patch operations are still executed separately.
+// To achieve atomicity at the level of a single server-side operation, use 'defer' or the operations syntax.
+
+session.advanced().patch("employees/1", "FirstName", "Robert");
+session.advanced().patch("employees/1", "LastName", "Carter");
+
+session.saveChanges();
+`}
+
+
+
+
+{`// Change FirstName to Robert and LastName to Carter in single request
+// Note that here we do maintain the atomicity of the operation
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.FirstName = args.firstName;" +
+ "this.LastName = args.lastName");
+
+Map<String, Object> values = new HashMap<>();
+values.put("firstName", "Robert");
+values.put("lastName", "Carter");
+patchRequest.setValues(values);
+
+session.advanced().defer(new PatchCommandData("employees/1", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`// Change FirstName to Robert and LastName to Carter in single request
+// Note that here we do maintain the atomicity of the operation
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.FirstName = args.firstName; " +
+ "this.LastName = args.lastName");
+
+Map<String, Object> values = new HashMap<>();
+values.put("firstName", "Robert");
+values.put("lastName", "Carter");
+patchRequest.setValues(values);
+
+store.operations().send(new PatchOperation("employees/1", null, patchRequest));
+`}
+
+
+
+
+### Increment Value
+
+
+
+
+{`// increment UnitsInStock property value by 10
+session.advanced().increment("products/1-A", "UnitsInStock", 10);
+
+session.saveChanges();
+`}
+
+
+
+
+{`PatchRequest request = new PatchRequest();
+request.setScript("this.UnitsInStock += args.unitsToAdd");
+request.setValues(Collections.singletonMap("unitsToAdd", 10));
+
+session.advanced().defer(
+ new PatchCommandData("products/1-A", null, request, null));
+session.saveChanges();
+`}
+
+
+
+
+{`PatchRequest request = new PatchRequest();
+request.setScript("this.UnitsInStock += args.unitsToAdd");
+request.setValues(Collections.singletonMap("unitsToAdd", 10));
+store.operations().send(new PatchOperation("products/1-A", null, request));
+`}
+
+
+
+
+### Add Item to Array
+
+
+
+
+{`BlogComment comment = new BlogComment();
+comment.setContent("Lore ipsum");
+comment.setTitle("Some title");
+
+session.advanced()
+ .patch("blogposts/1", "comments", comments -> comments.add(comment));
+
+session.saveChanges();
+`}
+
+
+
+
+{`// add a new comment to comments
+BlogComment comment = new BlogComment();
+comment.setContent("Lore ipsum");
+comment.setTitle("Some title");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.comments.push(args.comment");
+patchRequest.setValues(Collections.singletonMap("comment", comment));
+
+session.advanced().defer(new PatchCommandData("blogposts/1", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`// add a new comment to comments
+BlogComment comment = new BlogComment();
+comment.setContent("Lore ipsum");
+comment.setTitle("Some title");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.comments.push(args.comment");
+patchRequest.setValues(Collections.singletonMap("comment", comment));
+
+store.operations().send(new PatchOperation("blogposts/1", null, patchRequest));
+`}
+
+
+
+
+### Insert Item into Specific Position in Array
+
+Inserting an item at a specific position is supported only by the non-typed APIs.
+
+
+
+{`BlogComment comment = new BlogComment();
+comment.setContent("Lore ipsum");
+comment.setTitle("Some title");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.comments.splice(1, 0, args.comment)");
+patchRequest.setValues(Collections.singletonMap("comment", comment));
+
+session.advanced().defer(new PatchCommandData("blogposts/1", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`BlogComment comment = new BlogComment();
+comment.setContent("Lore ipsum");
+comment.setTitle("Some title");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.comments.splice(1, 0, args.comment)");
+patchRequest.setValues(Collections.singletonMap("comment", comment));
+
+store.operations().send(new PatchOperation("blogposts/1", null, patchRequest));
+`}
+
+
+
+
+### Modify Item in Specific Position in Array
+
+Modifying an item at a specific position is supported only by the non-typed APIs.
+
+
+
+{`// modify a comment at position 3 in Comments
+BlogComment comment = new BlogComment();
+comment.setContent("Lore ipsum");
+comment.setTitle("Some title");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.comments.splice(3, 1, args.comment)");
+patchRequest.setValues(Collections.singletonMap("comment", comment));
+
+session.advanced().defer(new PatchCommandData("blogposts/1", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`// modify a comment at position 3 in Comments
+BlogComment comment = new BlogComment();
+comment.setContent("Lore ipsum");
+comment.setTitle("Some title");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.comments.splice(3, 1, args.comment)");
+patchRequest.setValues(Collections.singletonMap("comment", comment));
+
+store.operations().send(new PatchOperation("blogposts/1", null, patchRequest));
+`}
+
+
+
+
+### Remove Items from Array
+
+Filtering items out of an array is supported only by the non-typed APIs.
+
+
+
+{`// filter out all comments of a blogpost which contain the word "wrong" in their content
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.comments = this.comments.filter(comment " +
+ "=> !comment.content.includes(args.text));");
+patchRequest.setValues(Collections.singletonMap("text", "wrong"));
+
+session.advanced().defer(
+ new PatchCommandData("blogposts/1", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`// filter out all comments of a blogpost which contain the word "wrong" in their content
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.comments = this.comments.filter(comment " +
+ "=> !comment.content.includes(args.text));");
+patchRequest.setValues(Collections.singletonMap("text", "wrong"));
+
+store.operations().send(new PatchOperation("blogposts/1", null, patchRequest));
+`}
+
+
+
+
+### Loading Documents in a Script
+
+Loading documents is supported only by the non-typed APIs.
+
+
+
+{`// update product names in order, according to loaded product documents
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.Lines.forEach(line => {" +
+ " var productDoc = load(line.Product);" +
+ " line.ProductName = productDoc.Name;" +
+ "});");
+
+session.advanced().defer(
+ new PatchCommandData("orders/1", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`// update product names in order, according to loaded product documents
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("this.Lines.forEach(line => {" +
+ " var productDoc = load(line.Product);" +
+ " line.ProductName = productDoc.Name;" +
+ "});");
+
+store.operations().send(new PatchOperation("blogposts/1", null, patchRequest));
+`}
+
+
+
+
+### Remove Property
+
+Removing a property is supported only by the non-typed APIs.
+
+
+
+{`// remove the Age property
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("delete this.Age;");
+
+session.advanced().defer(new PatchCommandData("employees/1", null, patchRequest, null));
+
+session.saveChanges();
+`}
+
+
+
+
+{`// remove the Age property
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("delete this.Age;");
+
+store.operations().send(new PatchOperation("employees/1", null, patchRequest));
+`}
+
+
+
+
+### Rename Property
+
+Renaming a property is supported only by the non-typed APIs.
+
+
+
+{`// rename FirstName to Name
+
+Map<String, Object> value = new HashMap<>();
+value.put("old", "FirstName");
+value.put("new", "Name");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("var firstName = this[args.rename.old];" +
+ "delete this[args.rename.old];" +
+ "this[args.rename.new] = firstName;");
+patchRequest.setValues(Collections.singletonMap("rename", value));
+
+session.advanced().defer(new PatchCommandData("employees/1", null, patchRequest, null));
+
+session.saveChanges();
+`}
+
+
+
+
+Map<String, Object> value = new HashMap<>();
+value.put("old", "FirstName");
+value.put("new", "Name");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("var firstName = this[args.rename.old];" +
+ "delete this[args.rename.old];" +
+ "this[args.rename.new] = firstName;");
+patchRequest.setValues(Collections.singletonMap("rename", value));
+
+store.operations().send(new PatchOperation("employees/1", null, patchRequest));
+`}
+
+
+
+
+### Add Document
+
+Adding a new document is supported only by the non-typed APIs
+
+
+
+{`PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("put('orders/', { Employee: id(this) });");
+PatchCommandData commandData =
+ new PatchCommandData("employees/1-A", null, patchRequest, null);
+session.advanced().defer(commandData);
+session.saveChanges();
+`}
+
+
+
+
+{`PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("put('orders/', { Employee: id(this) });");
+
+store.operations().send(new PatchOperation("employees/1-A", null, patchRequest));
+`}
+
+
+
+
+### Clone Document
+
+To clone a document via patching, use the `put` method within the patching script as follows:
+
+
+
+{`PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("put('employees/', this);");
+PatchCommandData commandData =
+ new PatchCommandData("employees/1-A", null, patchRequest, null);
+session.advanced().defer(commandData);
+session.saveChanges();
+`}
+
+
+
+
+{`PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("put('employees/', this);");
+
+store.operations().send(new PatchOperation("employees/1-A", null, patchRequest));
+`}
+
+
+
+
+
+
+**Attachments, Counters, Time Series, and Revisions:**
+
+ * When cloning a document via patching, only the document's fields are copied to the new document.
+ Attachments, counters, time series data, and revisions from the source document will Not be copied automatically.
+ * To manage time series & counters via patching, you can use the pre-defined JavaScript methods listed here:
+ [Counters methods](../../../server/kb/javascript-engine.mdx#counter-operations) & [Time series methods](../../../server/kb/javascript-engine.mdx#time-series-operations).
+ * Note: When [Cloning a document via the Studio](../../../studio/database/documents/create-new-document.mdx#clone-an-existing-document),
+    attachments, counters, time series, and revisions will be copied automatically.
+
+**Archived documents:**
+
+ * If the source document is archived, the cloned document will Not be archived.
+
+
+
+### Increment Counter
+
+To increment or create a counter, use the <code>incrementCounter</code> method as follows:
+
+
+
+{`Map<String, Object> scriptValues = new HashMap<>();
+scriptValues.put("name", "likes");
+scriptValues.put("val", 20);
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("incrementCounter(this, args.name, args.val);");
+patchRequest.setValues(scriptValues);
+
+// Register the command on the session; without 'defer' the patch would never be sent
+session.advanced().defer(
+    new PatchCommandData("orders/1-A", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`Map<String, Object> scriptValues = new HashMap<>();
+scriptValues.put("name", "likes");
+scriptValues.put("val", -1);
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("incrementCounter(this, args.name, args.val);");
+patchRequest.setValues(scriptValues);
+
+PatchOperation patchOperation = new PatchOperation("orders/1-A", null, patchRequest);
+store.operations().send(patchOperation);
+`}
+
+
+
+
+
+The method can be called by document ID or by document reference, and the value can be negative.
+
+
+### Delete Counter
+
+To delete a counter, use the <code>deleteCounter</code> method as follows:
+
+
+
+{`Map<String, Object> scriptValues = new HashMap<>();
+scriptValues.put("name", "Likes");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("deleteCounter(this, args.name);");
+patchRequest.setValues(scriptValues);
+
+// Register the command on the session; without 'defer' the patch would never be sent
+session.advanced().defer(
+    new PatchCommandData("products/1-A", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`Map<String, Object> scriptValues = new HashMap<>();
+scriptValues.put("name", "Likes");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("deleteCounter(this, args.name);");
+patchRequest.setValues(scriptValues);
+
+PatchOperation patchOperation = new PatchOperation("products/1-A", null, patchRequest);
+store.operations().send(patchOperation);
+`}
+
+
+
+
+
+The method can be called by document ID or by document reference.
+
+
+### Get Counter
+
+To get a counter value while patching, use the <code>counter</code> method as follows:
+
+
+
+{`Map<String, Object> scriptValues = new HashMap<>();
+scriptValues.put("name", "Likes");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("var likes = counter(this.Company, args.name);\\n" +
+    "put('result/', {company: this.Company, likes: likes});");
+patchRequest.setValues(scriptValues);
+
+// Register the command on the session; without 'defer' the patch would never be sent
+session.advanced().defer(
+    new PatchCommandData("orders/1-A", null, patchRequest, null));
+session.saveChanges();
+`}
+
+
+
+
+{`Map<String, Object> scriptValues = new HashMap<>();
+scriptValues.put("name", "Likes");
+
+PatchRequest patchRequest = new PatchRequest();
+patchRequest.setScript("var likes = counter(this.Company, args.name);\\n" +
+ "put('result/', {company: this.Company, likes: likes});");
+patchRequest.setValues(scriptValues);
+
+PatchOperation patchOperation = new PatchOperation("orders/1-A", null, patchRequest);
+store.operations().send(patchOperation);
+`}
+
+
+
+
+
+The method can be called by document ID or by document reference.
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/_single-document-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/patching/_single-document-nodejs.mdx
new file mode 100644
index 0000000000..826c859b26
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/_single-document-nodejs.mdx
@@ -0,0 +1,1549 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Patching allows **updating only parts of a document** in a single trip to the server,
+ without having to load, modify, and save the entire document back to the database.
+
+* This is particularly efficient for large documents or when only a small portion of the document needs to be changed,
+ reducing the amount of data transferred over the network.
+
+* The patching operation is executed on the server side within a [single write transaction](../../../client-api/faq/transaction-support.mdx).
+
+* This article covers patch operations on single documents from the Client API.
+  To patch multiple documents that match certain criteria, see [Set based patching](../../../client-api/operations/patching/set-based.mdx).
+ Patching can also be done from the [Studio](../../../studio/database/documents/patch-view.mdx).
+
+* In this page:
+
+ * [API overview](../../../client-api/operations/patching/single-document.mdx#api-overview)
+
+ * [Examples](../../../client-api/operations/patching/single-document.mdx#examples)
+ * [Modify value of single field](../../../client-api/operations/patching/single-document.mdx#modify-value-of-single-field)
+ * [Modify values of two fields](../../../client-api/operations/patching/single-document.mdx#modify-values-of-two-fields)
+ * [Increment value](../../../client-api/operations/patching/single-document.mdx#increment-value)
+ * [Add or increment](../../../client-api/operations/patching/single-document.mdx#add-or-increment)
+ * [Add or patch](../../../client-api/operations/patching/single-document.mdx#add-or-patch)
+ * [Add item to array](../../../client-api/operations/patching/single-document.mdx#add-item-to-array)
+ * [Add or patch an existing array](../../../client-api/operations/patching/single-document.mdx#add-or-patch-an-existing-array)
+ * [Insert item into specific position in array](../../../client-api/operations/patching/single-document.mdx#insert-item-into-specific-position-in-array)
+ * [Modify item in specific position in array](../../../client-api/operations/patching/single-document.mdx#modify-item-in-specific-position-in-array)
+ * [Remove items from array](../../../client-api/operations/patching/single-document.mdx#remove-items-from-array)
+ * [Load documents in a script](../../../client-api/operations/patching/single-document.mdx#load-documents-in-a-script)
+ * [Remove property](../../../client-api/operations/patching/single-document.mdx#remove-property)
+ * [Rename property](../../../client-api/operations/patching/single-document.mdx#rename-property)
+ * [Add document](../../../client-api/operations/patching/single-document.mdx#add-document)
+ * [Clone document](../../../client-api/operations/patching/single-document.mdx#clone-document)
+ * [Create/Increment counter](../../../client-api/operations/patching/single-document.mdx#createincrement-counter)
+ * [Delete counter](../../../client-api/operations/patching/single-document.mdx#delete-counter)
+ * [Get counter](../../../client-api/operations/patching/single-document.mdx#get-counter)
+ * [Patching using inline string compilation](../../../client-api/operations/patching/single-document.mdx#patching-using-inline-string-compilation)
+
+ * [Syntax](../../../client-api/operations/patching/single-document.mdx#syntax)
+ * [Session API syntax](../../../client-api/operations/patching/single-document.mdx#session-api-syntax)
+ * [Session API using defer syntax](../../../client-api/operations/patching/single-document.mdx#session-api-using-defer-syntax)
+ * [Operations API syntax](../../../client-api/operations/patching/single-document.mdx#operations-api-syntax)
+ * [List of script methods syntax](../../../client-api/operations/patching/single-document.mdx#list-of-script-methods-syntax)
+
+
+## API overview
+
+Patching can be performed using either of the following interfaces (detailed syntax is provided [below](../../../client-api/operations/patching/single-document.mdx#syntax)):
+
+* **Session API**
+* **Session API using defer**
+* **Operations API**
+
+
+#### Session API
+
+* This interface allows performing the most common patch operations.
+
+* Multiple patch methods can be defined on the [session](../../../client-api/session/what-is-a-session-and-how-does-it-work.mdx)
+ and are sent to the server for execution in a single batch (along with any other modified documents) only when calling [SaveChanges](../../../client-api/session/saving-changes.mdx).
+
+* This API includes the following patching methods (see examples [below](../../../client-api/operations/patching/single-document.mdx#examples)):
+ * `patch`
+ * `addOrPatch`
+ * `increment`
+ * `addOrIncrement`
+ * `patchArray`
+ * `addOrPatchArray`
+
+
+
+
+#### Session API using defer
+
+* Use `defer` to manipulate the patch request directly without wrapper methods.
+ Define the patch request yourself with a **script** and optional variables.
+
+* The patch request is wrapped in a `PatchCommandData` command,
+ which is then added to the session using the `defer` function.
+
+* Similar to the above Session API,
+ all patch requests done via `defer` are sent to the server for execution only when _saveChanges_ is called.
+
+
+
+
+#### Operations API
+
+* [Operations](../../../client-api/operations/what-are-operations.mdx) allow performing ad-hoc requests directly on the document store **without** creating a session.
+
+* Similar to the above _defer_ usage, define the patch request yourself with a script and optional variables.
+
+* The patch request constructs the `PatchOperation`, which is sent to the server and executed immediately, without waiting for _saveChanges_.
+
+
+
+
+## Examples
+
+
+
+#### Modify value of single field
+
+
+
+{`// Modify FirstName to Robert using the 'patch' method
+// ===================================================
+
+session.advanced.patch("employees/1-A", "FirstName", "Robert");
+await session.saveChanges();
+`}
+
+
+
+
+{`// Modify FirstName to Robert using 'defer' with 'PatchCommandData'
+// ================================================================
+
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.FirstName = args.FirstName;";
+patchRequest.values = { FirstName: "Robert" };
+
+const patchCommand = new PatchCommandData("employees/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Modify FirstName to Robert via 'PatchOperation' on the documentStore
+// ====================================================================
+
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.FirstName = args.FirstName;";
+patchRequest.values = { FirstName: "Robert" };
+
+const patchOp = new PatchOperation("employees/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Modify values of two fields
+
+
+
+{`// Modify FirstName to Robert and LastName to Carter in single request
+// ===================================================================
+
+// The two Patch operations below are sent via 'saveChanges()' which complete transactionally,
+// as this call generates a single HTTP request to the database.
+// Either both will succeed - or both will be rolled back - since they are applied within the same
+// transaction.
+
+// However, on the server side, the two Patch operations are still executed separately.
+// To achieve atomicity at the level of a single server-side operation, use 'defer' or an 'operation'.
+
+session.advanced.patch("employees/1-A", "FirstName", "Robert");
+session.advanced.patch("employees/1-A", "LastName", "Carter");
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Modify FirstName to Robert and LastName to Carter in single request
+// ===================================================================
+
+// Note that here we do maintain the operation's atomicity
+const patchRequest = new PatchRequest();
+patchRequest.script = \`this.FirstName = args.FirstName;
+ this.LastName = args.LastName;\`;
+patchRequest.values = {
+ FirstName: "Robert",
+ LastName: "Carter"
+};
+
+const patchCommand = new PatchCommandData("employees/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Modify FirstName to Robert and LastName to Carter in single request
+// ===================================================================
+
+// Note that here we do maintain the operation's atomicity
+const patchRequest = new PatchRequest();
+patchRequest.script = \`this.FirstName = args.FirstName;
+ this.LastName = args.LastName;\`;
+patchRequest.values = {
+ FirstName: "Robert",
+ LastName: "Carter"
+};
+
+const patchOp = new PatchOperation("employees/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Increment value
+
+
+
+{`// Increment UnitsInStock property value by 10
+// ===========================================
+
+session.advanced.increment("products/1-A", "UnitsInStock", 10);
+await session.saveChanges();
+`}
+
+
+
+
+{`// Increment UnitsInStock property value by 10
+// ===========================================
+
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.UnitsInStock += args.UnitsToAdd;";
+patchRequest.values = {
+ UnitsToAdd: 10
+};
+
+const patchCommand = new PatchCommandData("products/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Increment UnitsInStock property value by 10
+// ===========================================
+
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.UnitsInStock += args.UnitsToAdd;";
+patchRequest.values = {
+ UnitsToAdd: 10
+};
+
+const patchOp = new PatchOperation("products/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Add or increment
+`addOrIncrement` behavior:
+
+* If document exists + has the specified field =>
+ * A numeric field will be **incremented** by the specified value.
+ * A string field will be **concatenated** with the specified value.
+ * The entity passed is disregarded.
+* If document exists + does Not contain the specified field =>
+ * The field will be **added** to the document with the specified value.
+ * The entity passed is disregarded.
+* If document does Not exist =>
+ * A new document will be **created** from the provided entity.
+ * The value to increment by is disregarded.
+
+
+
+
+{`// An entity that will be used in case the specified document is not found:
+const newUser = new User();
+newUser.firstName = "John";
+newUser.lastName = "Doe";
+newUser.loginCount = 1;
+
+session.advanced.addOrIncrement(
+ // Specify document id on which the operation should be performed
+ "users/1",
+ // Specify an entity,
+ // if the specified document is Not found, a new document will be created from this entity
+ newUser,
+ // The field that should be incremented
+ "loginCount",
+ // Increment the specified field by this value
+ 2);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`class User {
+ constructor(
+ id = null,
+ firstName = "",
+ lastName = "",
+ loginCount = 0,
+ lastLogin = new Date(),
+ loginTimes = []
+ ) {
+ Object.assign(this, {
+ id,
+ firstName,
+ lastName,
+ loginCount,
+ lastLogin,
+ loginTimes
+ });
+ }
+}
+`}
+
+
+
+
+
+
+
+#### Add or patch
+`addOrPatch` behavior:
+
+* If document exists + has the specified field =>
+ * The field will be **patched**, the specified value will replace the existing value.
+ * The entity passed is disregarded.
+* If document exists + does Not contain the specified field =>
+ * The field will be **added** to the document with the specified value.
+ * The entity passed is disregarded.
+* If document does Not exist =>
+ * A new document will be **created** from the provided entity.
+ * The value to patch by is disregarded.
+
+
+
+
+{`// An entity that will be used in case the specified document is not found:
+const newUser = new User();
+newUser.firstName = "John";
+newUser.lastName = "Doe";
+newUser.lastLogin = new Date(2024, 0, 1);
+
+session.advanced.addOrPatch(
+ // Specify document id on which the operation should be performed
+ "users/1",
+ // Specify an entity,
+ // if the specified document is Not found, a new document will be created from this entity
+ newUser,
+ // The field that should be patched
+ "lastLogin",
+ // Set the current date and time as the new value for the specified field
+ new Date());
+
+await session.saveChanges();
+`}
+
+
+
+
+{`class User {
+ constructor(
+ id = null,
+ firstName = "",
+ lastName = "",
+ loginCount = 0,
+ lastLogin = new Date(),
+ loginTimes = []
+ ) {
+ Object.assign(this, {
+ id,
+ firstName,
+ lastName,
+ loginCount,
+ lastLogin,
+ loginTimes
+ });
+ }
+}
+`}
+
+
+
+
+
+
+
+#### Add item to array
+`patchArray` behavior:
+
+* If document exists + has the specified array =>
+ * Item will be **added** to the array.
+* If document exists + does Not contain the specified array field =>
+ * No exception is thrown, no patching is done, a new array is Not created.
+* If document does Not exist =>
+ * No exception is thrown, no patching is done, a new document is Not created.
+
+
+
+
+{`// Add a new comment to an array
+// =============================
+
+// The new comment to add:
+const newBlogComment = new BlogComment();
+newBlogComment.content = "Some content";
+newBlogComment.title = "Some title";
+
+// Call 'patchArray':
+session.advanced.patchArray(
+ "blogPosts/1", // Document id to patch
+ "comments", // The array to add the comment to
+ comments => { // Adding the new comment
+ comments.push(newBlogComment);
+ });
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Add a new comment to an array
+// =============================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.comments.push(args.comment);";
+patchRequest.values = {
+ comment: {
+ title: "Some title",
+ content: "Some content",
+ }
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("blogPosts/1", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Add a new comment to an array
+// =============================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.comments.push(args.comment);";
+patchRequest.values = {
+ comment: {
+ title: "Some title",
+ content: "Some content",
+ }
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("blogPosts/1", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+{`class BlogPost {
+ constructor(
+ id = null,
+ title = "",
+ body = "",
+ comments = []
+ ) {
+ Object.assign(this, {
+ id,
+ title,
+ body,
+ comments
+ });
+ }
+}
+
+class BlogComment {
+ constructor(
+ title = "",
+ content = ""
+ ) {
+ Object.assign(this, {
+ title,
+ content
+ });
+ }
+}
+`}
+
+
+
+
+
+
+
+#### Add or patch an existing array
+`addOrPatchArray` behavior:
+
+* If document exists + has the specified array field =>
+ * The specified values will be **added** to the existing array values.
+ * The entity passed is disregarded.
+* If document exists + does Not contain the specified array field =>
+ * The array field is Not added to the document, no patching is done.
+ * The entity passed is disregarded.
+* If document does Not exist =>
+ * A new document will be **created** from the provided entity.
+ * The value to patch by is disregarded.
+
+
+
+
+{`// An entity that will be used in case the specified document is not found:
+const newUser = new User();
+newUser.firstName = "John";
+newUser.lastName = "Doe";
+newUser.loginTimes = [new Date(2024, 0, 1)];
+
+session.advanced.addOrPatchArray(
+ // Specify document id on which the operation should be performed
+ "users/1",
+ // Specify an entity,
+ // if the specified document is Not found, a new document will be created from this entity
+ newUser,
+ // The array field that should be patched
+ "loginTimes",
+ // Add values to the list of the specified array field
+ a => a.push(new Date(2024, 2, 2), new Date(2024, 3, 3)));
+
+await session.saveChanges();
+`}
+
+
+
+
+{`class User {
+ constructor(
+ id = null,
+ firstName = "",
+ lastName = "",
+ loginCount = 0,
+ lastLogin = new Date(),
+ loginTimes = []
+ ) {
+ Object.assign(this, {
+ id,
+ firstName,
+ lastName,
+ loginCount,
+ lastLogin,
+ loginTimes
+ });
+ }
+}
+`}
+
+
+
+
+
+
+
+#### Insert item into specific position in array
+* Inserting an item in a specific position is supported only by the _defer_ or the _operations_ syntax.
+* No exception is thrown if either the document or the specified array does not exist.
+
+
+
+
+{`// Insert a new comment at position 1
+// ==================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.comments.splice(1, 0, args.comment);";
+patchRequest.values = {
+ comment: {
+ title: "Some title",
+ content: "Some content",
+ }
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("blogPosts/1", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Insert a new comment at position 1
+// ==================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.comments.splice(1, 0, args.comment);";
+patchRequest.values = {
+ comment: {
+ title: "Some title",
+ content: "Some content",
+ }
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("blogPosts/1", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+{`class BlogPost {
+ constructor(
+ id = null,
+ title = "",
+ body = "",
+ comments = []
+ ) {
+ Object.assign(this, {
+ id,
+ title,
+ body,
+ comments
+ });
+ }
+}
+
+class BlogComment {
+ constructor(
+ title = "",
+ content = ""
+ ) {
+ Object.assign(this, {
+ title,
+ content
+ });
+ }
+}
+`}
+
+
+
+
+
+
+
+#### Modify item in specific position in array
+* Modifying an item in a specific position is supported only by the _defer_ or the _operations_ syntax.
+* No exception is thrown if either the document or the specified array does not exist.
+
+
+
+
+{`// Modify comment at position 3
+// ============================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.comments.splice(3, 1, args.comment);";
+patchRequest.values = {
+ comment: {
+ title: "Some title",
+ content: "Some content",
+ }
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("blogPosts/1", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Modify comment at position 3
+// ============================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = "this.comments.splice(3, 1, args.comment);";
+patchRequest.values = {
+ comment: {
+ title: "Some title",
+ content: "Some content",
+ }
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("blogPosts/1", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+{`class BlogPost {
+ constructor(
+ id = null,
+ title = "",
+ body = "",
+ comments = []
+ ) {
+ Object.assign(this, {
+ id,
+ title,
+ body,
+ comments
+ });
+ }
+}
+
+class BlogComment {
+ constructor(
+ title = "",
+ content = ""
+ ) {
+ Object.assign(this, {
+ title,
+ content
+ });
+ }
+}
+`}
+
+
+
+
+
+
+
+#### Remove items from array
+* Removing all items that match some predicate from an array is supported only by the _defer_ or the _operations_ syntax.
+* No exception is thrown if either the document or the specified array does not exist.
+
+
+
+
+{`// Remove all comments that contain the word "wrong" in their content
+// ==================================================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = \`this.comments = this.comments.filter(comment =>
+ !comment.content.includes(args.text));\`;
+patchRequest.values = {
+ text: "wrong"
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("blogPosts/1", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Remove all comments that contain the word "wrong" in their content
+// ==================================================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = \`this.comments = this.comments.filter(comment =>
+ !comment.content.includes(args.text));\`;
+patchRequest.values = {
+ text: "wrong"
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("blogPosts/1", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+{`class BlogPost {
+ constructor(
+ id = null,
+ title = "",
+ body = "",
+ comments = []
+ ) {
+ Object.assign(this, {
+ id,
+ title,
+ body,
+ comments
+ });
+ }
+}
+
+class BlogComment {
+ constructor(
+ title = "",
+ content = ""
+ ) {
+ Object.assign(this, {
+ title,
+ content
+ });
+ }
+}
+`}
+
+
+
+
+
+
+
+#### Load documents in a script
+* Loading documents is supported only by the _defer_ or the _operations_ syntax.
+
+
+
+
+{`// Load a related document and update a field
+// ==========================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = \`this.Lines.forEach(line => {
+ const productDoc = load(line.Product);
+ line.ProductName = productDoc.Name;
+ });\`;
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("orders/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Load a related document and update a field
+// ==========================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = \`this.Lines.forEach(line => {
+ const productDoc = load(line.Product);
+ line.ProductName = productDoc.Name;
+ });\`;
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("orders/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Remove property
+* Removing a property is supported only by the _defer_ or the _operations_ syntax.
+
+
+
+
+{`// Remove a document property
+// ==========================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = \`delete this.Address.PostalCode;\`;
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("employees/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Remove a document property
+// ==========================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = \`delete this.Address.PostalCode;\`;
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("employees/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Rename property
+* Renaming a property is supported only by the _defer_ or the _operations_ syntax.
+
+
+
+
+{`// Rename property Name to ProductName
+// ===================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = \`const propertyValue = this[args.currentProperty];
+ delete this[args.currentProperty];
+ this[args.newProperty] = propertyValue;\`;
+patchRequest.values = {
+ currentProperty: "Name",
+ newProperty: "ProductName"
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("products/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Rename property Name to ProductName
+// ===================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+patchRequest.script = \`const propertyValue = this[args.currentProperty];
+ delete this[args.currentProperty];
+ this[args.newProperty] = propertyValue;\`;
+patchRequest.values = {
+ currentProperty: "Name",
+ newProperty: "ProductName"
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("products/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Add document
+* Adding a new document is supported only by the _defer_ or the _operations_ syntax.
+
+
+
+
+{`// Add a new document
+// ==================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Add a new document (projects/1) to collection Projects
+// The id of the patched document (employees/1-A) is used as content for ProjectLeader property
+patchRequest.script = \`put('projects/1', {
+ ProjectLeader: id(this),
+ ProjectDesc: 'Some desc..',
+ '@metadata': { '@collection': 'Projects'}
+ });\`;
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("employees/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Add a new document
+// ==================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Add a new document (projects/1) to collection Projects
+// The id of the patched document (employees/1-A) is used as content for ProjectLeader property
+patchRequest.script = \`put('projects/1', {
+ ProjectLeader: id(this),
+ ProjectDesc: 'Some desc..',
+ '@metadata': { '@collection': 'Projects'}
+ });\`;
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("employees/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Clone document
+* Cloning a document is supported only by the _defer_ or the _operations_ syntax.
+
+
+
+
+{`// Clone a document
+// ================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// The new document will be in the same collection as 'employees/1-A'
+// By specifying 'employees/' the server will generate a 'server-side ID' for the new document
+patchRequest.script = \`put('employees/', this);\`;
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("employees/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Clone a document
+// ================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// The new document will be in the same collection as 'employees/1-A'
+// By specifying 'employees/' the server will generate a 'server-side ID' for the new document
+patchRequest.script = \`put('employees/', this);\`;
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("employees/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+**Attachments, Counters, Time Series, and Revisions:**
+
+ * When cloning a document via patching, only the document's fields are copied to the new document.
+ Attachments, counters, time series data, and revisions from the source document will Not be copied automatically.
+ * To manage time series & counters via patching, you can use the pre-defined JavaScript methods listed here:
+ [Counters methods](../../../server/kb/javascript-engine.mdx#counter-operations) & [Time series methods](../../../server/kb/javascript-engine.mdx#time-series-operations).
+ * Note: When [Cloning a document via the Studio](../../../studio/database/documents/create-new-document.mdx#clone-an-existing-document),
+ attachments, counters, time series, and revisions will be copied automatically.
+
+**Archived documents:**
+
+ * If the source document is archived, the cloned document will Not be archived.
+
+
+
+
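+For example, a known counter can be carried over to the clone manually within the same patch script.
+The following is a minimal sketch that combines `put` with the counter methods; it assumes the
+counter name ("Likes") is known in advance and relies on `put` returning the ID of the newly
+created document:
+
+
+{`// Clone a document and copy a known counter to the clone
+// ======================================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// 'put' returns the ID of the newly created clone.
+// Read the counter from the source and recreate it on the clone:
+patchRequest.script = \`const newId = put('employees/', this);
+    const likes = counter(this, args.counterName);
+    if (likes !== null) incrementCounter(newId, args.counterName, likes);\`;
+patchRequest.values = {
+    counterName: "Likes"
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("employees/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+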
+
+
+#### Create/Increment counter
+
+
+
+{`// Increment/Create counter
+// ========================
+
+// Increase counter "Likes" by 10, or create it with a value of 10 if it doesn't exist
+session.countersFor("products/1-A").increment("Likes", 10);
+await session.saveChanges();
+`}
+
+
+
+
+{`// Create/Increment counter
+// ========================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Use the 'incrementCounter' method to create/increment a counter
+patchRequest.script = \`incrementCounter(this, args.counterName, args.counterValue);\`;
+patchRequest.values = {
+ counterName: "Likes",
+ counterValue: 10
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("products/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Create/Increment counter
+// ========================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Use the 'incrementCounter' method to create/increment a counter
+patchRequest.script = \`incrementCounter(this, args.counterName, args.counterValue);\`;
+patchRequest.values = {
+ counterName: "Likes",
+ counterValue: 10
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("products/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+Learn more about Counters in this [Counters Overview](../../../document-extensions/counters/overview.mdx).
+
+
+
+
+
+
+#### Delete counter
+
+
+
+{`// Delete counter
+// ==============
+
+session.countersFor("products/1-A").delete("Likes");
+await session.saveChanges();
+`}
+
+
+
+
+{`// Delete counter
+// ==============
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Use the 'deleteCounter' method to delete a counter
+patchRequest.script = \`deleteCounter(this, args.counterName);\`;
+patchRequest.values = {
+ counterName: "Likes"
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("products/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Delete counter
+// ==============
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Use the 'deleteCounter' method to delete a counter
+patchRequest.script = \`deleteCounter(this, args.counterName);\`;
+patchRequest.values = {
+ counterName: "Likes"
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("products/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Get counter
+
+
+
+{`// Get counter value
+// =================
+
+const likes = await session.countersFor("products/1-A").get("Likes");
+`}
+
+
+
+
+{`// Get counter value
+// =================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Use the 'counter' method to get the value of the specified counter
+// and then put the results into a new document 'productLikes/'
+patchRequest.script = \`const numberOfLikes = counter(this, args.counterName);
+ put('productLikes/', {ProductName: this.Name, Likes: numberOfLikes});\`;
+
+patchRequest.values = {
+ counterName: "Likes"
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("products/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+`}
+
+
+
+
+{`// Get counter value
+// =================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Use the 'counter' method to get the value of the specified counter
+// and then put the results into a new document 'productLikes/'
+patchRequest.script = \`const numberOfLikes = counter(this, args.counterName);
+ put('productLikes/', {ProductName: this.Name, Likes: numberOfLikes});\`;
+
+patchRequest.values = {
+ counterName: "Likes"
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("products/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+`}
+
+
+
+
+
+
+
+#### Patching using inline string compilation
+* When using a JavaScript script with the _defer_ or _operations_ syntax,
+ you can apply logic using **inline string compilation**.
+* To enable this, set the [Patching.AllowStringCompilation](../../../server/configuration/patching-configuration.mdx#patchingallowstringcompilation) configuration key to _true_.
+
+
+
+
+{`// Modify value using inline string compilation
+// ============================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Define the script:
+patchRequest.script = \`
+ // Give a discount if the product is low in stock:
+ const functionBody = "return doc.UnitsInStock < lowStock ? " +
+ "doc.PricePerUnit * discount :" +
+ "doc.PricePerUnit;";
+
+ // Define a function that processes the document and returns the price:
+ const calcPrice = new Function("doc", "lowStock", "discount", functionBody);
+
+ // Update the product's PricePerUnit based on the function:
+ this.PricePerUnit = calcPrice(this, args.lowStock, args.discount);\`;
+
+patchRequest.values = {
+ discount: "0.8",
+ lowStock: "10"
+};
+
+// Define the 'PatchCommandData':
+const patchCommand = new PatchCommandData("products/1-A", null, patchRequest);
+session.advanced.defer(patchCommand);
+
+await session.saveChanges();
+
+// The same can be applied using the 'operations' syntax.
+`}
+
+
+
+
+{`// Modify value using inline string compilation
+// ============================================
+
+// Define the 'PatchRequest':
+const patchRequest = new PatchRequest();
+
+// Define the script:
+patchRequest.script = \`
+ // Give a discount if the product is low in stock:
+ const discountExpression = "this.UnitsInStock < args.lowStock ? " +
+ "this.PricePerUnit * args.discount :" +
+ "this.PricePerUnit";
+
+ // Call 'eval', pass the string expression that contains your logic:
+ const price = eval(discountExpression);
+
+ // Update the product's PricePerUnit:
+ this.PricePerUnit = price;\`;
+
+patchRequest.values = {
+ discount: "0.8",
+ lowStock: "10"
+};
+
+// Define and send the 'PatchOperation':
+const patchOp = new PatchOperation("products/1-A", null, patchRequest);
+await documentStore.operations.send(patchOp);
+
+// The same can be applied using the 'session defer' syntax.
+`}
+
+
+
+
+
+
+
+## Syntax
+### Session API syntax
+
+
+
+
+{`patch(id, path, value);
+patch(entity, path, value);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | Document ID on which patching should be performed. |
+| **entity** | `object` | Entity on which patching should be performed. The entity should be one that was returned by the current session in a `load` or `query` operation. |
+| **path** | `string` | The path to the field. |
+| **value** | `object` | Value to set. |
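+
+For example, the entity overload can be used on an entity loaded by the current session
+(a minimal sketch, assuming document "blogPosts/1" exists):
+
+
+{`const blogPost = await session.load("blogPosts/1");
+
+// Patch by entity instead of by document id:
+session.advanced.patch(blogPost, "title", "Some new title");
+
+await session.saveChanges();
+`}
+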
+
+
+
+{`addOrPatch(id, entity, pathToObject, value);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------|----------|------------------------------------------------------------------------------------------|
+| **id** | `string` | Document ID on which patching should be performed. |
+| **entity** | `object` | If the specified document is Not found, a new document will be created from this entity. |
+| **pathToObject** | `string` | The path to the field. |
+| **value** | `object` | Value to set. |
+
+
+
+{`increment(id, path, valueToAdd);
+increment(entity, path, valueToAdd);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | Document ID on which patching should be performed. |
+| **entity** | `object` | Entity on which patching should be performed. The entity should be one that was returned by the current session in a `load` or `query` operation. |
+| **path** | `string` | The path to the field. |
+| **valueToAdd** | `object` | Value to increment by. Note how numbers are handled with the [JavaScript engine](../../../server/kb/numbers-in-ravendb.mdx) in RavenDB. |
+
+
+
+{`addOrIncrement(id, entity, pathToObject, valToAdd);
+`}
+
+
+
+| Parameter | Type | Description |
+|----------------|----------|------------------------------------------------------------------------------------------|
+| **id** | `string` | Document ID on which patching should be performed. |
+| **entity** | `object` | If the specified document is Not found, a new document will be created from this entity. |
+| **pathToObject** | `string` | The path to the field. |
+| **valToAdd** | `object` | Value to increment by. |
+
+
+
+{`patchArray(id, pathToArray, arrayAdder);
+patchArray(entity, pathToArray, arrayAdder);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|-----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | Document ID on which patching should be performed. |
+| **entity** | `object` | Entity on which patching should be performed. The entity should be one that was returned by the current session in a `load` or `query` operation. |
+| **pathToArray** | `string` | The path to the array field. |
+| **arrayAdder** | `(JavaScriptArray) => void` | Function that modifies the array. |
+
+
+
+{`addOrPatchArray(id, entity, pathToArray, arrayAdder);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|-----------------------------|------------------------------------------------------------------------------------------|
+| **id** | `string` | Document ID on which patching should be performed. |
+| **entity** | `object` | If the specified document is Not found, a new document will be created from this entity. |
+| **pathToArray** | `string` | The path to the array field. |
+| **arrayAdder** | `(JavaScriptArray) => void` | Function that modifies the array. |
+
+
+
+{`class JavaScriptArray \{
+ push(...u); // Append one or more values to the array
+ removeAt(index); // Remove an item from position 'index' in the array
+\}
+`}
+
+
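+For example, the array-adder function can call `removeAt` to delete an item by its position
+(a minimal sketch, assuming document "blogPosts/1" contains a 'comments' array):
+
+
+{`// Remove the first comment from the 'comments' array:
+session.advanced.patchArray(
+    "blogPosts/1",                      // Document id to patch
+    "comments",                         // The array to modify
+    comments => comments.removeAt(0));  // Remove the item at position 0
+
+await session.saveChanges();
+`}
+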
+### Session API using defer syntax
+
+
+
+{`session.advanced.defer(...commands);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------|------------|-----------------------------------------------------------------------------------------------------------|
+| **commands** | `object[]` | List of commands that will be executed on the server. Use the `PatchCommandData` command for patching. |
+
+
+
+{`class PatchCommandData \{
+ // ID of document to be patched
+ id; // string
+
+ // Change vector of document to be patched, can be null.
+ // Used to verify that the document was not changed before the patch is executed.
+ changeVector; // string
+
+ // Patch request to be performed on the document
+ patch; // A PatchRequest object
+
+ // Patch request to perform if no document with the specified ID was found
+ patchIfMissing; // A PatchRequest object
+\}
+`}
+
+
+
+
+
+{`class PatchRequest \{
+ // The JavaScript code to be run on the server
+ script; // string
+
+ // Parameters to be passed to the script
+ values; // Dictionary
+
+ // It is highly recommended to use the script with parameters.
+ // This allows RavenDB to cache scripts and boost performance.
+ // The parameters are accessed in the script via the \`args\` object.
+\}
+`}
+
+
+### Operations API syntax
+
+* Learn more about using operations in this [Operations overview](../../../client-api/operations/what-are-operations.mdx).
+
+
+
+{`const patchOperation = new PatchOperation(id, changeVector, patch);
+
+const patchOperation = new PatchOperation(id, changeVector, patch, patchIfMissing,
+ skipPatchIfChangeVectorMismatch);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------------------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **id** | `string` | ID of the document to be patched. |
+| **changeVector** | `string` | Change vector of the document to be patched. Used to verify that the document was not modified before the patch is executed. Can be null. |
+| **patch** | `PatchRequest` | Patch request to perform on the document. |
+| **patchIfMissing** | `PatchRequest` | Patch request to perform if the specified document is not found. Will run only if no `changeVector` was passed. Can be null. |
+| **skipPatchIfChangeVectorMismatch** | `boolean` | `true` - do not patch if the document has been modified. `false` (Default) - execute the patch even if the document has been modified. An exception is thrown if this param is `false`, `changeVector` has a value, and a document with that ID and change vector was not found. |
+
+### List of script methods syntax
+
+* For a complete list of JavaScript methods available in patch scripts,
+ refer to [Knowledge Base: JavaScript Engine](../../../server/kb/javascript-engine.mdx#predefined-javascript-functions).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/json-patch-syntax.mdx b/versioned_docs/version-7.1/client-api/operations/patching/json-patch-syntax.mdx
new file mode 100644
index 0000000000..0ae772b926
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/json-patch-syntax.mdx
@@ -0,0 +1,249 @@
+---
+title: "Patching: JSON Patch Syntax"
+hide_table_of_contents: true
+sidebar_label: JSON Patch Syntax
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Patching: JSON Patch Syntax
+
+
+
+* You can use the **JSON Patch Syntax** from your client to apply changes
+ to RavenDB documents via JSON objects.
+
+* A JSON Patch is a document constructed of JSON objects, each containing
+ the ID of a target (RavenDB) document and a patch operation to be applied
+ to this document.
+
+* Since the operation is executed in a single request to a database, the JSON Patch command is performed in a single write [transaction](../../../client-api/faq/transaction-support.mdx).
+
+* JSON Patch operations include -
+ * **Add** a document property
+ * **Remove** a document property
+ * **Replace** the contents of a document property
+ * **Copy** the contents of one document property to another
+ * **Move** the contents of one document property to another
+ * **Test** whether the patching succeeded
+
+* In this page:
+ * [JSON Patches](../../../client-api/operations/patching/json-patch-syntax.mdx#json-patches)
+ * [Running JSON Patches](../../../client-api/operations/patching/json-patch-syntax.mdx#running-json-patches)
+ * [Patch Operations](../../../client-api/operations/patching/json-patch-syntax.mdx#patch-operations)
+ * [Add Document Property](../../../client-api/operations/patching/json-patch-syntax.mdx#add-operation)
+ * [Remove Document Property](../../../client-api/operations/patching/json-patch-syntax.mdx#remove-document-property)
+ * [Replace Document Property Contents](../../../client-api/operations/patching/json-patch-syntax.mdx#replace-document-property-contents)
+ * [Copy Document Property Contents to Another Property](../../../client-api/operations/patching/json-patch-syntax.mdx#copy-document-property-contents-to-another-property)
+ * [Move Document Property Contents to Another Property](../../../client-api/operations/patching/json-patch-syntax.mdx#move-document-property-contents-to-another-property)
+ * [Test Patching Operation](../../../client-api/operations/patching/json-patch-syntax.mdx#test-patching-operation)
+ * [Additional JSON Patching Options](../../../client-api/operations/patching/json-patch-syntax.mdx#additional-json-patching-options)
+
+
+## JSON Patches
+
+* Similar to other forms of patching, JSON Patches can be used by a client to
+ swiftly change any number of documents without loading and editing the documents
+ locally first.
+
+* A series of JSON objects, each containing a patch operation and a document ID,
+ is added to an ASP.NET `JsonPatchDocument` object that is sent to the server for
+ execution.
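+
+For illustration, the patch document that is eventually sent to the server is, per the
+[JSON Patch RFC](https://datatracker.ietf.org/doc/html/rfc6902), simply an array of
+operation objects (the property names below are illustrative):
+
+
+{`[
+  \{ "op": "replace", "path": "/Name", "value": "New name" \},
+  \{ "op": "remove", "path": "/ObsoleteProperty" \}
+]
+`}
+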
+### When are JSON Patches Used?
+
+JSON Patches include no RQL or C# code, and offer a limited set of operations
+compared to other patching methods.
+Users may still prefer them over other methods when, for example -
+
+ * A client that works with databases of multiple vendors prefers broadcasting patches
+ with a common syntax to all of them.
+ * An automated process that builds and applies patches finds it easier
+ to send JSON patches.
+
+
+
+## Running JSON Patches
+
+To run JSON patches -
+
+* Use the `Microsoft.AspNetCore.JsonPatch` namespace in your code.
+ E.g. `using Microsoft.AspNetCore.JsonPatch;`
+* Create a `JsonPatchDocument` instance and append your patches to it.
+* Pass your `JsonPatchDocument` to RavenDB's `JsonPatchOperation` operation to run the patches.
+ * `JsonPatchOperation` Parameters
+
+ | Parameters | Type | Description |
+ |:-------------|:-------------|:-------------|
+ | id | `string` | The ID of the document we want to patch |
+ | jsonPatchDocument | `JsonPatchDocument` | The document containing the patch operations |
+
+
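+
+Several operations can be appended to one `JsonPatchDocument` and sent together;
+they are applied in order within a single write transaction.
+A minimal sketch (the property names and `documentId` below are illustrative):
+
+
+{`var patchesDocument = new JsonPatchDocument();
+patchesDocument.Replace("/Name", "New name");
+patchesDocument.Remove("/ObsoleteProperty");
+store.Operations.Send(new JsonPatchOperation(documentId, patchesDocument));
+`}
+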
+
+
+
+## Patch Operations
+
+### Add Operation
+
+Use the `Add` operation to add a document property or an array element.
+
+* **Method Parameters**
+
+ | Parameters | Type | Description |
+ |:-------------|:-------------|:-------------|
+ | path | `string` | Path to the property we want to add |
+ | value | `object` | Property value |
+
+* **Code Sample - Add a document property**
+
+
+{`var patchesDocument = new JsonPatchDocument();
+patchesDocument.Add("/PropertyName", "Contents");
+store.Operations.Send(new JsonPatchOperation(documentId, patchesDocument));
+`}
+
+
+### Remove Document Property
+
+Use the `Remove` operation to remove a document property or an array element.
+
+* **Method Parameters**
+
+ | Parameters | Type | Description |
+ |:-------------|:-------------|:-------------|
+ | path | `string` | Path to the property we want to remove |
+
+* **Code Sample - Remove a document property**
+
+
+{`patchesDocument = new JsonPatchDocument();
+patchesDocument.Remove("/PropertyName");
+store.Operations.Send(new JsonPatchOperation(documentId, patchesDocument));
+`}
+
+
+### Replace Document Property Contents
+
+Use the `Replace` operation to replace the contents of a document property or an array element.
+
+* **Method Parameters**
+
+ | Parameters | Type | Description |
+ |:-------------|:-------------|:-------------|
+ | path | `string` | Path to the property whose contents we want to replace |
+ | value | `object` | New contents |
+
+* **Code Sample - Replace a document property**
+
+
+{`patchesDocument = new JsonPatchDocument();
+// Replace document property contents with a new value ("NewContents")
+patchesDocument.Replace("/PropertyName", "NewContents");
+store.Operations.Send(new JsonPatchOperation(documentId, patchesDocument));
+`}
+
+
+### Copy Document Property Contents to Another Property
+
+Use the `Copy` operation to copy the contents of one document property or array element to another.
+
+* **Method Parameters**
+
+ | Parameters | Type | Description |
+ |:-------------|:-------------|:-------------|
+ | from | `string` | Path to the property we want to copy |
+ | path| `string` | Path to the property we want to copy to |
+
+* **Code Sample - Copy document property contents**
+
+
+{`patchesDocument = new JsonPatchDocument();
+// Copy document property contents to another document property
+patchesDocument.Copy("/PropertyName1", "/PropertyName2");
+store.Operations.Send(new JsonPatchOperation(documentId, patchesDocument));
+`}
+
+
+### Move Document Property Contents to Another Property
+
+Use the `Move` operation to move the contents of one document property or array element to another.
+
+* **Method Parameters**
+
+ | Parameters | Type | Description |
+ |:-------------|:-------------|:-------------|
+ | from | `string` | Path to the property whose contents we want to move |
+ | path| `string` | Path to the property we want to move the contents to |
+
+* **Code Sample - Move document property contents**
+
+
+{`patchesDocument = new JsonPatchDocument();
+// Move document property contents to another document property
+patchesDocument.Move("/PropertyName1", "/PropertyName2");
+store.Operations.Send(new JsonPatchOperation(documentId, patchesDocument));
+`}
+
+
+### Test Patching Operation
+
+Use the `Test` operation to verify patching operations.
+If the test fails, all patching operations included in the patches document will be reverted
+and a `RavenException` exception will be thrown.
+
+* **Method Parameters**
+
+ | Parameters | Type | Description |
+ |:-------------|:-------------|:-------------|
+ | path | `string` | Path to the property we want to test |
+ | value | `object` | Value to compare `path` with |
+
+
+* **Code Sample - Test Patching**
+
+
+
+{`patchesDocument = new JsonPatchDocument();
+patchesDocument.Test("/PropertyName", "Value"); // Compare property contents with the value
+ // Revert all patch operations if the test fails
+try
+\{
+ store.Operations.Send(new JsonPatchOperation(documentId, patchesDocument));
+\}
+catch (RavenException e)
+\{
+ // handle the exception
+\}
+`}
+
+
+
+
+
+## Additional JSON Patching Options
+
+The samples above are deliberately simple, showing how to manipulate document properties.
+Note that JSON Patches have additional options, like the manipulation of array or list elements:
+
+* **Add an array element**
+
+
+{`patchesDocument = new JsonPatchDocument();
+// Use the path parameter to insert an element at position 12 of the array
+patchesDocument.Add("/ArrayName/12", "Contents");
+store.Operations.Send(new JsonPatchOperation(documentId, patchesDocument));
+`}
+
+
+
+You can learn more about additional JSON patching options in the [JSON Patch RFC](https://datatracker.ietf.org/doc/html/rfc6902),
+among other resources.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/set-based.mdx b/versioned_docs/version-7.1/client-api/operations/patching/set-based.mdx
new file mode 100644
index 0000000000..950d41d7ff
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/set-based.mdx
@@ -0,0 +1,44 @@
+---
+title: "Set-Based Patch Operations"
+hide_table_of_contents: true
+sidebar_label: Set Based
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SetBasedCsharp from './_set-based-csharp.mdx';
+import SetBasedJava from './_set-based-java.mdx';
+import SetBasedNodejs from './_set-based-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/patching/single-document.mdx b/versioned_docs/version-7.1/client-api/operations/patching/single-document.mdx
new file mode 100644
index 0000000000..7a0a616752
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/patching/single-document.mdx
@@ -0,0 +1,41 @@
+---
+title: "Single Document Patch Operations"
+hide_table_of_contents: true
+sidebar_label: Single Document
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SingleDocumentCsharp from './_single-document-csharp.mdx';
+import SingleDocumentJava from './_single-document-java.mdx';
+import SingleDocumentNodejs from './_single-document-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-csharp.mdx
new file mode 100644
index 0000000000..f406565b19
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-csharp.mdx
@@ -0,0 +1,117 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When creating a database, you can specify the number of replicas for that database.
+ This determines the number of database instances in the database-group.
+
+* **The number of replicas can be dynamically increased** even after the database is up and running,
+ by adding more nodes to the database-group.
+
+* The nodes added must already exist in the [cluster topology](../../../server/clustering/rachis/cluster-topology.mdx).
+
+* Once a new node is added to the database-group,
+ the cluster assigns a mentor node (from the existing database-group nodes) to update the new node.
+
+* In this page:
+ * [Add database node - random](../../../client-api/operations/server-wide/add-database-node.mdx#add-database-node---random)
+ * [Add database node - specific](../../../client-api/operations/server-wide/add-database-node.mdx#add-database-node---specific)
+ * [Syntax](../../../client-api/operations/server-wide/add-database-node.mdx#syntax)
+
+## Add database node - random
+
+* Use `AddDatabaseNodeOperation` to add another database-instance to the database-group.
+* The node added will be a random node from the existing cluster nodes.
+
+
+
+
+{`// Create the AddDatabaseNodeOperation
+// Add a random node to 'Northwind' database-group
+var addDatabaseNodeOp = new AddDatabaseNodeOperation("Northwind");
+
+// Execute the operation by passing it to Maintenance.Server.Send
+DatabasePutResult result = store.Maintenance.Server.Send(addDatabaseNodeOp);
+
+// Can access the new topology
+var numberOfReplicas = result.Topology.AllNodes.Count();
+`}
+
+
+
+
+{`// Create the AddDatabaseNodeOperation
+// Add a random node to 'Northwind' database-group
+var addDatabaseNodeOp = new AddDatabaseNodeOperation("Northwind");
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+DatabasePutResult result = await store.Maintenance.Server.SendAsync(addDatabaseNodeOp);
+
+// Can access the new topology
+var numberOfReplicas = result.Topology.AllNodes.Count();
+`}
+
+
+
+
+
+
+## Add database node - specific
+
+* You can specify the node tag to add.
+* This node must already exist in the cluster topology.
+
+
+
+
+{`// Create the AddDatabaseNodeOperation
+// Add node C to 'Northwind' database-group
+var addDatabaseNodeOp = new AddDatabaseNodeOperation("Northwind", "C");
+
+// Execute the operation by passing it to Maintenance.Server.Send
+DatabasePutResult result = store.Maintenance.Server.Send(addDatabaseNodeOp);
+`}
+
+
+
+
+{`// Create the AddDatabaseNodeOperation
+// Add node C to 'Northwind' database-group
+var addDatabaseNodeOp = new AddDatabaseNodeOperation("Northwind", "C");
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+DatabasePutResult result = await store.Maintenance.Server.SendAsync(addDatabaseNodeOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public AddDatabaseNodeOperation(string databaseName, string nodeTag = null)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **databaseName** | `string` | Name of a database for which to add the node. |
+| **nodeTag** | `string` | Tag of node to add. Default: a random node from the existing cluster topology will be added. |
+
+| Object returned by Send operation: `DatabasePutResult` | Type | Description |
+| - | - | - |
+| RaftCommandIndex | `long` | Index of the raft command that was executed |
+| Name | `string` | Database name |
+| Topology | `DatabaseTopology` | The database topology |
+| NodesAddedTo | `List<string>` | New nodes added to the cluster topology. Will be empty for this operation. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-nodejs.mdx
new file mode 100644
index 0000000000..432c806220
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-nodejs.mdx
@@ -0,0 +1,92 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When creating a database, you can specify the number of replicas for that database.
+ This determines the number of database instances in the database-group.
+
+* **The number of replicas can be dynamically increased** even after the database is up and running,
+ by adding more nodes to the database-group.
+
+* The nodes added must already exist in the [cluster topology](../../../server/clustering/rachis/cluster-topology.mdx).
+
+* Once a new node is added to the database-group,
+ the cluster assigns a mentor node (from the existing database-group nodes) to update the new node.
+
+* In this page:
+ * [Add database node - random](../../../client-api/operations/server-wide/add-database-node.mdx#add-database-node---random)
+ * [Add database node - specific](../../../client-api/operations/server-wide/add-database-node.mdx#add-database-node---specific)
+ * [Syntax](../../../client-api/operations/server-wide/add-database-node.mdx#syntax)
+
+## Add database node - random
+
+* Use `AddDatabaseNodeOperation` to add another database-instance to the database-group.
+* The node added will be a random node from the existing cluster nodes.
+
+
+
+{`// Create the AddDatabaseNodeOperation
+// Add a random node to 'Northwind' database-group
+const addDatabaseNodeOp = new AddDatabaseNodeOperation("Northwind");
+
+// Execute the operation by passing it to maintenance.server.send
+const result = await documentStore.maintenance.server.send(addDatabaseNodeOp);
+
+// Can access the new topology
+const numberOfReplicas = getAllNodesFromTopology(result.topology).length;
+`}
+
+
+
+
+
+## Add database node - specific
+
+* You can specify the node tag to add.
+* This node must already exist in the cluster topology.
+
+
+
+{`// Create the AddDatabaseNodeOperation
+// Add node C to 'Northwind' database-group
+const addDatabaseNodeOp = new AddDatabaseNodeOperation("Northwind", "C");
+
+// Execute the operation by passing it to maintenance.server.send
+const result = await documentStore.maintenance.server.send(addDatabaseNodeOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const addDatabaseNodeOp = new AddDatabaseNodeOperation(databaseName, nodeTag?);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **databaseName** | `string` | Name of a database for which to add the node. |
+| **nodeTag** | `string` | Tag of node to add. Default: If not passed then a random node from the existing cluster topology will be added. |
+
+| Object returned by send operation has: | Type | Description |
+| - | - | - |
+| raftCommandIndex | `number` | Index of the raft command that was executed |
+| name | `string` | Database name |
+| topology | `DatabaseTopology` | The database topology |
+| nodesAddedTo | `string[]` | New nodes added to the cluster topology. Will be empty for this operation. |
+
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-php.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-php.mdx
new file mode 100644
index 0000000000..64e98a4f11
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-php.mdx
@@ -0,0 +1,90 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When creating a database, you can specify the number of replicas for that database.
+ This determines the number of database instances in the database-group.
+
+* **The number of replicas can be dynamically increased** even after the database is up and running,
+ by adding more nodes to the database-group.
+
+* The nodes added must already exist in the [cluster topology](../../../server/clustering/rachis/cluster-topology.mdx).
+
+* Once a new node is added to the database-group,
+ the cluster assigns a mentor node (from the existing database-group nodes) to update the new node.
+
+* In this page:
+ * [Add database node - random](../../../client-api/operations/server-wide/add-database-node.mdx#add-database-node---random)
+ * [Add database node - specific](../../../client-api/operations/server-wide/add-database-node.mdx#add-database-node---specific)
+ * [Syntax](../../../client-api/operations/server-wide/add-database-node.mdx#syntax)
+
+## Add database node - random
+
+* Use `AddDatabaseNodeOperation` to add another database-instance to the database-group.
+* The node added will be a random node from the existing cluster nodes.
+
+
+
+{`// Create the AddDatabaseNodeOperation
+// Add a random node to 'Northwind' database-group
+$addDatabaseNodeOp = new AddDatabaseNodeOperation("Northwind");
+
+// Execute the operation by passing it to Maintenance.Server.Send
+/** @var DatabasePutResult $result */
+$result = $store->maintenance()->server()->send($addDatabaseNodeOp);
+
+// Can access the new topology
+$numberOfReplicas = count($result->getTopology()->getMembers());
+`}
+
+
+
+
+
+## Add database node - specific
+
+* You can specify the node tag to add.
+* This node must already exist in the cluster topology.
+
+
+
+{`// Create the AddDatabaseNodeOperation
+// Add node C to 'Northwind' database-group
+$addDatabaseNodeOp = new AddDatabaseNodeOperation("Northwind", "C");
+
+// Execute the operation by passing it to Maintenance.Server.Send
+/** @var DatabasePutResult $result */
+$result = $store->maintenance()->server()->send($addDatabaseNodeOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`AddDatabaseNodeOperation(?string $databaseName, ?string $nodeTag = null)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$databaseName** | `?string` | Name of a database for which to add the node. |
+| **$nodeTag** | `?string` | Tag of node to add. Default: a random node from the existing cluster topology will be added. |
+
+| Object returned by Send operation: `DatabasePutResult` | Type | Description |
+| - | - | - |
+| $name | `string` | Database name |
+| $topology | `DatabaseTopology` | The database topology |
+| $nodesAddedTo | `StringArray` | New nodes added to the cluster topology. Will be empty for this operation. |
+| $raftCommandIndex | `int` | Raft command index |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-python.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-python.mdx
new file mode 100644
index 0000000000..cc0517815d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_add-database-node-python.mdx
@@ -0,0 +1,89 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* When creating a database, you can specify the number of replicas for that database.
+ This determines the number of database instances in the database-group.
+
+* **The number of replicas can be dynamically increased** even after the database is up and running,
+ by adding more nodes to the database-group.
+
+* The nodes added must already exist in the [cluster topology](../../../server/clustering/rachis/cluster-topology.mdx).
+
+* Once a new node is added to the database-group,
+ the cluster assigns a mentor node (from the existing database-group nodes) to update the new node.
+
+* In this page:
+ * [Add database node - random](../../../client-api/operations/server-wide/add-database-node.mdx#add-database-node---random)
+ * [Add database node - specific](../../../client-api/operations/server-wide/add-database-node.mdx#add-database-node---specific)
+ * [Syntax](../../../client-api/operations/server-wide/add-database-node.mdx#syntax)
+
+## Add database node - random
+
+* Use `AddDatabaseNodeOperation` to add another database-instance to the database-group.
+* The node added will be a random node from the existing cluster nodes.
+
+
+
+{`# Create the AddDatabaseNodeOperation
+# Add a random node to 'Northwind' database-group
+add_database_node_op = AddDatabaseNodeOperation("Northwind")
+
+# Execute the operation by passing it to maintenance.server.send
+result = store.maintenance.server.send(add_database_node_op)
+
+# Can access the new topology
+number_of_replicas = len(result.topology.all_nodes)
+`}
+
+
+
+
+
+## Add database node - specific
+
+* You can specify the node tag to add.
+* This node must already exist in the cluster topology.
+
+
+
+{`# Create the AddDatabaseNodeOperation
+# Add node C to 'Northwind' database-group
+add_database_node_op = AddDatabaseNodeOperation("Northwind", "C")
+
+# Execute the operation by passing it to maintenance.server.send
+result = store.maintenance.server.send(add_database_node_op)
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`class AddDatabaseNodeOperation(ServerOperation[DatabasePutResult]):
+ def __init__(self, database_name: str, node_tag: str = None): ...
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **database_name** | `str` | Name of a database for which to add the node. |
+| **node_tag** | `str` | Tag of node to add. Default: a random node from the existing cluster topology will be added. |
+
+| Object returned by Send operation: `DatabasePutResult` | Type | Description |
+| - | - | - |
+| raft_command_index | `int` | Index of the raft command that was executed |
+| name | `str` | Database name |
+| topology | `DatabaseTopology` | The database topology |
+| nodes_added_to | `list` | New nodes added to the cluster topology. Will be empty for this operation. |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_category_.json b/versioned_docs/version-7.1/client-api/operations/server-wide/_category_.json
new file mode 100644
index 0000000000..581d93d734
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 4,
+ "label": Server-Maintenance,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-csharp.mdx
new file mode 100644
index 0000000000..6b121e989a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-csharp.mdx
@@ -0,0 +1,346 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `CompactDatabaseOperation` compaction operation to **remove empty gaps on disk**
+ that still occupy space after deletes.
+ You can choose whether to compact _documents_ and/or _selected indexes_.
+
+* **During compaction the database will be offline**.
+ The operation is executed asynchronously as a background operation and can be awaited.
+
+* The operation will **compact the database on one node**.
+ To compact all database-group nodes, the command must be sent to each node separately.
+
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the
+ [client configuration](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ The operation can be executed on a specific node by using the
+ [ForNode](../../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) method.
+
+* **Target database**:
+ The database to compact is specified in `CompactSettings` (see examples below).
+ An exception is thrown if the specified database doesn't exist on the server node.
+
+* In this page:
+ * [Examples](../../../client-api/operations/server-wide/compact-database.mdx#examples):
+ * [Compact documents](../../../client-api/operations/server-wide/compact-database.mdx#examples)
+ * [Compact specific indexes](../../../client-api/operations/server-wide/compact-database.mdx#compact-specific-indexes)
+ * [Compact all indexes](../../../client-api/operations/server-wide/compact-database.mdx#compact-all-indexes)
+ * [Compact on other nodes](../../../client-api/operations/server-wide/compact-database.mdx#compact-on-other-nodes)
+ * [Compaction triggers compression](../../../client-api/operations/server-wide/compact-database.mdx#compaction-triggers-compression)
+ * [Compact from Studio](../../../client-api/operations/server-wide/compact-database.mdx#compact-from-studio)
+ * [Syntax](../../../client-api/operations/server-wide/compact-database.mdx#syntax)
+
+
+## Examples
+
+#### Compact documents:
+
+The following example will compact only **documents** for the specified database.
+
+
+
+
+{`// Define the compact settings
+CompactSettings settings = new CompactSettings
+{
+ // Database to compact
+ DatabaseName = "Northwind",
+
+ // Set 'Documents' to true to compact all documents in database
+ // Indexes are not set and will not be compacted
+ Documents = true
+};
+
+// Define the compact operation, pass the settings
+IServerOperation<OperationIdResult> compactOp = new CompactDatabaseOperation(settings);
+
+// Execute compaction by passing the operation to Maintenance.Server.Send
+Operation operation = documentStore.Maintenance.Server.Send(compactOp);
+
+// Wait for operation to complete, during compaction the database is offline
+operation.WaitForCompletion();
+`}
+
+
+
+
+{`// Define the compact settings
+CompactSettings settings = new CompactSettings
+{
+ // Database to compact
+ DatabaseName = "Northwind",
+
+ // Set 'Documents' to true to compact all documents in database
+ // Indexes are not set and will not be compacted
+ Documents = true
+};
+
+// Define the compact operation, pass the settings
+IServerOperation<OperationIdResult> compactOp = new CompactDatabaseOperation(settings);
+
+// Execute compaction by passing the operation to Maintenance.Server.SendAsync
+Operation operation = await documentStore.Maintenance.Server.SendAsync(compactOp);
+
+// Wait for operation to complete, during compaction the database is offline
+await operation.WaitForCompletionAsync().ConfigureAwait(false);
+`}
+
+
+
+#### Compact specific indexes:
+
+The following example will compact only specific indexes.
+
+
+
+
+{`// Define the compact settings
+CompactSettings settings = new CompactSettings
+{
+ // Database to compact
+ DatabaseName = "Northwind",
+
+ // Setting 'Documents' to false will compact only the specified indexes
+ Documents = false,
+
+ // Specify which indexes to compact
+ Indexes = new[] { "Orders/Totals", "Orders/ByCompany" },
+
+ // Optimizing indexes is a Lucene feature that reclaims disk space and improves efficiency
+ // Set whether to skip this optimization when compacting the indexes
+ SkipOptimizeIndexes = false
+};
+
+// Define the compact operation, pass the settings
+IServerOperation<OperationIdResult> compactOp = new CompactDatabaseOperation(settings);
+
+// Execute compaction by passing the operation to Maintenance.Server.Send
+Operation operation = documentStore.Maintenance.Server.Send(compactOp);
+// Wait for operation to complete
+operation.WaitForCompletion();
+`}
+
+
+
+
+{`// Define the compact settings
+CompactSettings settings = new CompactSettings
+{
+ // Database to compact
+ DatabaseName = "Northwind",
+
+ // Setting 'Documents' to false will compact only the specified indexes
+ Documents = false,
+
+ // Specify which indexes to compact
+ Indexes = new[] { "Orders/Totals", "Orders/ByCompany" },
+
+ // Optimizing indexes is a Lucene feature that reclaims disk space and improves efficiency
+ // Set whether to skip this optimization when compacting the indexes
+ SkipOptimizeIndexes = false
+};
+
+// Define the compact operation, pass the settings
+IServerOperation<OperationIdResult> compactOp = new CompactDatabaseOperation(settings);
+
+// Execute compaction by passing the operation to Maintenance.Server.SendAsync
+Operation operation = await documentStore.Maintenance.Server.SendAsync(compactOp);
+// Wait for operation to complete
+await operation.WaitForCompletionAsync().ConfigureAwait(false);
+`}
+
+
+
+#### Compact all indexes:
+
+The following example will compact all indexes and documents.
+
+
+
+
+{`// Get all index names in the database using the 'GetIndexNamesOperation' operation
+// Use 'ForDatabase' if the target database is different from the default database defined on the store
+string[] allIndexNames =
+ documentStore.Maintenance.ForDatabase("Northwind")
+ .Send(new GetIndexNamesOperation(0, int.MaxValue));
+
+// Define the compact settings
+CompactSettings settings = new CompactSettings
+{
+ DatabaseName = "Northwind", // Database to compact
+
+ Documents = true, // Compact all documents
+
+ Indexes = allIndexNames, // All indexes will be compacted
+
+ SkipOptimizeIndexes = true // Skip Lucene's indexes optimization
+};
+
+// Define the compact operation, pass the settings
+IServerOperation<OperationIdResult> compactOp = new CompactDatabaseOperation(settings);
+
+// Execute compaction by passing the operation to Maintenance.Server.Send
+Operation operation = documentStore.Maintenance.Server.Send(compactOp);
+// Wait for operation to complete
+operation.WaitForCompletion();
+`}
+
+
+
+
+{`// Get all index names in the database using the 'GetIndexNamesOperation' operation
+// Use 'ForDatabase' if the target database is different from the default database defined on the store
+string[] allIndexNames =
+ documentStore.Maintenance.ForDatabase("Northwind")
+ .Send(new GetIndexNamesOperation(0, int.MaxValue));
+
+// Define the compact settings
+CompactSettings settings = new CompactSettings
+{
+ DatabaseName = "Northwind", // Database to compact
+
+ Documents = true, // Compact all documents
+
+ Indexes = allIndexNames, // All indexes will be compacted
+
+ SkipOptimizeIndexes = true // Skip Lucene's indexes optimization
+};
+
+// Define the compact operation, pass the settings
+IServerOperation<OperationIdResult> compactOp = new CompactDatabaseOperation(settings);
+
+// Execute compaction by passing the operation to Maintenance.Server.SendAsync
+Operation operation = await documentStore.Maintenance.Server.SendAsync(compactOp);
+// Wait for operation to complete
+await operation.WaitForCompletionAsync();
+`}
+
+
+
+#### Compact on other nodes:
+
+* By default, an operation executes on the server node that is defined by the [client configuration](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+* The following example will compact the database on all [member](../../../server/clustering/rachis/cluster-topology.mdx#nodes-states-and-types) nodes from its database-group topology.
+ `ForNode` is used to execute the operation on a specific node.
+
+
+
+
+{`// Get all member nodes in the database-group using the 'GetDatabaseRecordOperation' operation
+List<string> allMemberNodes =
+ documentStore.Maintenance.Server.Send(new GetDatabaseRecordOperation("Northwind"))
+ .Topology.Members;
+
+// Define the compact settings as needed
+CompactSettings settings = new CompactSettings
+{
+ // Database to compact
+ DatabaseName = "Northwind",
+
+    // Compact all documents in database
+ Documents = true
+};
+
+// Execute the compact operation on each member node
+foreach (string nodeTag in allMemberNodes)
+{
+ // Define the compact operation, pass the settings
+    IServerOperation<OperationIdResult> compactOp = new CompactDatabaseOperation(settings);
+
+ // Execute the operation on a specific node
+ // Use \`ForNode\` to specify the node to operate on
+ Operation operation = documentStore.Maintenance.Server.ForNode(nodeTag).Send(compactOp);
+ // Wait for operation to complete
+ operation.WaitForCompletion();
+}
+`}
+
+
+
+
+{`// Get all member nodes in the database-group using the 'GetDatabaseRecordOperation' operation
+List<string> allMemberNodes =
+ documentStore.Maintenance.Server.Send(new GetDatabaseRecordOperation("Northwind"))
+ .Topology.Members;
+
+// Define the compact settings as needed
+CompactSettings settings = new CompactSettings
+{
+ // Database to compact
+ DatabaseName = "Northwind",
+
+    // Compact all documents in database
+ Documents = true
+};
+
+// Execute the compact operation on each member node
+foreach (string nodeTag in allMemberNodes)
+{
+ // Define the compact operation, pass the settings
+    IServerOperation<OperationIdResult> compactOp = new CompactDatabaseOperation(settings);
+
+ // Execute the operation on a specific node
+ // Use \`ForNode\` to specify the node to operate on
+ Operation operation = await documentStore.Maintenance.Server.ForNode(nodeTag).SendAsync(compactOp);
+ // Wait for operation to complete
+ await operation.WaitForCompletionAsync();
+}
+`}
+
+
+
+
+
+
+## Compaction triggers compression
+
+* When document [compression](../../../server/storage/documents-compression.mdx) is turned on, compression is applied when:
+  * **New** documents are created and saved.
+  * **Existing** documents are modified and saved.
+
+* You can use the [compaction](../../../client-api/operations/server-wide/compact-database.mdx) operation to **compress existing documents without having to modify and save** them.
+ Executing compaction triggers compression on ALL existing documents for the collections that are configured for compression.
+
+* Learn more about Compression -vs- Compaction [here](../../../server/storage/documents-compression.mdx#compression--vs--compaction).
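+
+For illustration, here is a minimal sketch of that flow. It assumes compression is enabled for a placeholder 'Orders' collection via `UpdateDocumentsCompressionConfigurationOperation` (described in the documents-compression article):
+
+
+
+{`// Enable compression for the 'Orders' collection (placeholder name)
+// Constructor params: compressRevisions, collections
+documentStore.Maintenance.Send(new UpdateDocumentsCompressionConfigurationOperation(
+    new DocumentsCompressionConfiguration(false, "Orders")));
+
+// Compacting the documents rewrites ALL existing 'Orders' documents,
+// storing them in compressed form
+CompactSettings settings = new CompactSettings
+{
+    DatabaseName = "Northwind",
+    Documents = true
+};
+
+Operation operation = documentStore.Maintenance.Server.Send(new CompactDatabaseOperation(settings));
+
+// Wait for operation to complete, during compaction the database is offline
+operation.WaitForCompletion();
+`}
+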
+
+
+
+## Compact from Studio
+
+* Compaction can be triggered from the [Storage Report](../../../studio/database/stats/storage-report.mdx) view in the Studio.
+ The operation will compact the database only on the node being viewed (node info is in the Studio footer).
+
+* To compact the database on another node,
+ simply trigger compaction from the Storage Report view in a browser tab opened for that other node.
+
+
+
+## Syntax
+
+
+
+{`public CompactDatabaseOperation(CompactSettings compactSettings)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **compactSettings** | `CompactSettings` | Settings for the compact operation |
+
+| `CompactSettings` | Type | Description |
+| - | - | - |
+| **DatabaseName** | `string` | Name of database to compact. Mandatory param. |
+| **Documents** | `bool` | Indicates if documents should be compacted. Optional param. |
+| **Indexes** | `string[]` | List of index names to compact. Optional param. |
+| **SkipOptimizeIndexes** | `bool` | `true` - skip Lucene's index optimization while compacting.<br/>`false` - Lucene's index optimization will take place while compacting. |
+| | | **Note**: Either _Documents_ or _Indexes_ (or both) must be specified |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-nodejs.mdx
new file mode 100644
index 0000000000..b0e37bef04
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-nodejs.mdx
@@ -0,0 +1,230 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `CompactDatabaseOperation` operation to **remove empty gaps on disk**
+ that still occupy space after deletes.
+ You can choose whether to compact _documents_ and/or _selected indexes_.
+
+* **During compaction the database will be offline**.
+  The operation is executed asynchronously as a background operation and can be awaited.
+
+* The operation will **compact the database on one node**.
+ To compact all database-group nodes, the command must be sent to each node separately.
+
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the
+ [client configuration](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+ The operation can be executed on a specific node by using the
+ [forNode](../../../client-api/operations/how-to/switch-operations-to-a-different-node.mdx) method.
+
+* **Target database**:
+ The database to compact is specified in `CompactSettings` (see examples below).
+ An exception is thrown if the specified database doesn't exist on the server node.
+
+* In this page:
+ * [Examples](../../../client-api/operations/server-wide/compact-database.mdx#examples):
+ * [Compact documents](../../../client-api/operations/server-wide/compact-database.mdx#examples)
+ * [Compact specific indexes](../../../client-api/operations/server-wide/compact-database.mdx#compact-specific-indexes)
+ * [Compact all indexes](../../../client-api/operations/server-wide/compact-database.mdx#compact-all-indexes)
+ * [Compact on other nodes](../../../client-api/operations/server-wide/compact-database.mdx#compact-on-other-nodes)
+ * [Compaction triggers compression](../../../client-api/operations/server-wide/compact-database.mdx#compaction-triggers-compression)
+ * [Compact from Studio](../../../client-api/operations/server-wide/compact-database.mdx#compact-from-studio)
+ * [Syntax](../../../client-api/operations/server-wide/compact-database.mdx#syntax)
+
+
+## Examples
+
+
+
+#### Compact documents
+
+* The following example will compact only **documents** for the specified database.
+
+
+
+{`// Define the compact settings
+const compactSettings = \{
+ // Database to compact
+ databaseName: "Northwind",
+
+ // Set 'documents' to true to compact all documents in database
+ // Indexes are not set and will not be compacted
+ documents: true
+\};
+
+// Define the compact operation, pass the settings
+const compactOp = new CompactDatabaseOperation(compactSettings);
+
+// Execute compaction by passing the operation to maintenance.server.send
+const asyncOperation = await documentStore.maintenance.server.send(compactOp);
+
+// Wait for operation to complete, during compaction the database is offline
+await asyncOperation.waitForCompletion();
+`}
+
+
+
+
+
+
+
+#### Compact specific indexes
+
+* The following example will compact only specific indexes.
+
+
+
+{`// Define the compact settings
+const compactSettings = \{
+ // Database to compact
+ databaseName: "Northwind",
+
+ // Setting 'documents' to false will compact only the specified indexes
+ documents: false,
+
+ // Specify which indexes to compact
+ indexes: ["Orders/Totals", "Orders/ByCompany"]
+\};
+
+// Define the compact operation, pass the settings
+const compactOp = new CompactDatabaseOperation(compactSettings);
+
+// Execute compaction by passing the operation to maintenance.server.send
+const asyncOperation = await documentStore.maintenance.server.send(compactOp);
+// Wait for operation to complete
+await asyncOperation.waitForCompletion();
+`}
+
+
+
+
+
+
+
+#### Compact all indexes
+
+* The following example will compact all indexes and documents.
+
+
+
+{`// Get all index names in the database using the 'GetIndexNamesOperation' operation
+// Use 'forDatabase' if the target database is different from the default database defined on the store
+const allIndexNames = await documentStore.maintenance.forDatabase("Northwind")
+    .send(new GetIndexNamesOperation(0, 2147483647)); // page size large enough to cover all indexes
+
+// Define the compact settings
+const compactSettings = \{
+ databaseName: "Northwind", // Database to compact
+
+ documents: true, // Compact all documents
+
+ indexes: allIndexNames, // All indexes will be compacted
+\};
+
+// Define the compact operation, pass the settings
+const compactOp = new CompactDatabaseOperation(compactSettings);
+
+// Execute compaction by passing the operation to maintenance.server.send
+const asyncOperation = await documentStore.maintenance.server.send(compactOp);
+// Wait for operation to complete
+await asyncOperation.waitForCompletion();
+`}
+
+
+
+
+
+
+
+#### Compact on other nodes
+
+* By default, an operation executes on the server node that is defined by the [client configuration](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+* The following example will compact the database on all [member](../../../server/clustering/rachis/cluster-topology.mdx#nodes-states-and-types) nodes from its database-group topology.
+ `forNode` is used to execute the operation on a specific node.
+
+
+
+{`// Get all member nodes in the database-group using the 'GetDatabaseRecordOperation' operation
+const databaseRecord =
+ await documentStore.maintenance.server.send(new GetDatabaseRecordOperation("Northwind"));
+const allMemberNodes = databaseRecord.topology.members;
+
+// Define the compact settings as needed
+const compactSettings = \{
+ // Database to compact
+ databaseName: "Northwind",
+
+    // Compact all documents in database
+ documents: true
+\};
+
+// Execute the compact operation on each member node
+for (let i = 0; i < allMemberNodes.length; i++) \{
+ // Define the compact operation, pass the settings
+ const compactOp = new CompactDatabaseOperation(compactSettings);
+
+ // Execute the operation on a specific node
+ // Use \`forNode\` to specify the node to operate on
+ const serverOpExecutor = await documentStore.maintenance.server.forNode(allMemberNodes[i]);
+ const asyncOperation = await serverOpExecutor.send(compactOp);
+
+ // Wait for operation to complete
+ await asyncOperation.waitForCompletion();
+\}
+`}
+
+
+
+
+
+
+## Compaction triggers compression
+
+* When document [compression](../../../server/storage/documents-compression.mdx) is turned on, compression is applied when:
+  * **New** documents are created and saved.
+  * **Existing** documents are modified and saved.
+
+* You can use the [compaction](../../../client-api/operations/server-wide/compact-database.mdx) operation to **compress existing documents without having to modify and save** them.
+ Executing compaction triggers compression on ALL existing documents for the collections that are configured for compression.
+
+* Learn more about Compression -vs- Compaction [here](../../../server/storage/documents-compression.mdx#compression--vs--compaction).
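+
+For illustration, a minimal sketch of that flow. It assumes compression has already been configured for the relevant collections (see the documents-compression article):
+
+
+
+{`// Compression is assumed to be configured for the target collections.
+// Compacting the documents rewrites ALL existing documents of those
+// collections, storing them in compressed form.
+const compactSettings = \{
+    databaseName: "Northwind",
+    documents: true
+\};
+
+const asyncOperation = await documentStore.maintenance.server.send(
+    new CompactDatabaseOperation(compactSettings));
+
+// Wait for operation to complete, during compaction the database is offline
+await asyncOperation.waitForCompletion();
+`}
+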
+
+
+
+## Compact from Studio
+
+* Compaction can be triggered from the [Storage Report](../../../studio/database/stats/storage-report.mdx) view in the Studio.
+ The operation will compact the database only on the node being viewed (node info is in the Studio footer).
+
+* To compact the database on another node,
+ simply trigger compaction from the Storage Report view in a browser tab opened for that other node.
+
+
+
+## Syntax
+
+
+
+{`const compactOperation = new CompactDatabaseOperation(compactSettings);
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **compactSettings** | `object` | Settings for the compact operation. See object fields below. |
+
+| compactSettings field | Type | Description |
+| - | - | - |
+| **databaseName** | `string` | Name of database to compact. Mandatory param. |
+| **documents** | `boolean` | Indicates if documents should be compacted. Optional param. |
+| **indexes** | `string[]` | List of index names to compact. Optional param. |
+| | | **Note**: Either _documents_ or _indexes_ (or both) must be specified |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-php.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-php.mdx
new file mode 100644
index 0000000000..86ad896f67
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-php.mdx
@@ -0,0 +1,220 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `CompactDatabaseOperation` operation to **remove empty gaps on disk**
+ that still occupy space after deletes.
+ You can choose whether to compact _documents_ and/or _selected indexes_.
+
+* **During compaction the database will be offline**.
+  The operation is executed asynchronously as a background operation and can be waited for
+ using `waitForCompletion()`.
+
+* The operation will **compact the database on one node**.
+ To compact all database-group nodes, the command must be sent to each node separately.
+
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the
+ [client configuration](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* **Target database**:
+ The database to compact is specified in `CompactSettings` (see examples below).
+ An exception is thrown if the specified database doesn't exist on the server node.
+
+* In this page:
+ * [Examples](../../../client-api/operations/server-wide/compact-database.mdx#examples):
+ * [Compact documents](../../../client-api/operations/server-wide/compact-database.mdx#examples)
+ * [Compact specific indexes](../../../client-api/operations/server-wide/compact-database.mdx#compact-specific-indexes)
+ * [Compact all indexes](../../../client-api/operations/server-wide/compact-database.mdx#compact-all-indexes)
+ * [Compact on other nodes](../../../client-api/operations/server-wide/compact-database.mdx#compact-on-other-nodes)
+ * [Compaction triggers compression](../../../client-api/operations/server-wide/compact-database.mdx#compaction-triggers-compression)
+ * [Compact from Studio](../../../client-api/operations/server-wide/compact-database.mdx#compact-from-studio)
+ * [Syntax](../../../client-api/operations/server-wide/compact-database.mdx#syntax)
+
+
+## Examples
+
+#### Compact documents:
+
+The following example will compact only **documents** for the specified database.
+
+
+
+{`// Define the compact settings
+$settings = new CompactSettings();
+$settings->setDatabaseName("Northwind");
+// Set 'Documents' to true to compact all documents in database
+// Indexes are not set and will not be compacted
+$settings->setDocuments(true);
+
+
+// Define the compact operation, pass the settings
+/** @var CompactDatabaseOperation $compactOp */
+$compactOp = new CompactDatabaseOperation($settings);
+
+// Execute compaction by passing the operation to maintenance()->server()->send()
+/** @var Operation $operation */
+$operation = $documentStore->maintenance()->server()->send($compactOp);
+
+// Wait for operation to complete, during compaction the database is offline
+$operation->waitForCompletion();
+`}
+
+
+#### Compact specific indexes:
+
+The following example will compact only specific indexes.
+
+
+
+{`// Define the compact settings
+$settings = new CompactSettings();
+
+// Database to compact
+$settings->setDatabaseName("Northwind");
+
+// Setting 'Documents' to false will compact only the specified indexes
+$settings->setDocuments(false);
+
+// Specify which indexes to compact
+$settings->setIndexes([ "Orders/Totals", "Orders/ByCompany" ]);
+
+// Index optimization is a Lucene feature that reclaims disk space and improves efficiency
+// Set whether to skip this optimization when compacting the indexes
+$settings->setSkipOptimizeIndexes(false);
+
+
+// Define the compact operation, pass the settings
+/** @var CompactDatabaseOperation $compactOp */
+$compactOp = new CompactDatabaseOperation($settings);
+
+// Execute compaction by passing the operation to maintenance()->server()->send()
+/** @var Operation $operation */
+$operation = $documentStore->maintenance()->server()->send($compactOp);
+// Wait for operation to complete
+$operation->waitForCompletion();
+`}
+
+
+#### Compact all indexes:
+
+The following example will compact all indexes and documents.
+
+
+
+{`// Get all index names in the database using the 'GetIndexNamesOperation' operation
+// Use 'forDatabase' if the target database is different from the default database defined on the store
+/** @var StringArrayResult $allIndexNames */
+$allIndexNames = $documentStore->maintenance()->forDatabase("Northwind")
+ ->send(new GetIndexNamesOperation(0, PhpClient::INT_MAX_VALUE));
+
+// Define the compact settings
+$settings = new CompactSettings();
+$settings->setDatabaseName("Northwind"); // Database to compact
+$settings->setDocuments(true); // Compact all documents
+$settings->setIndexes($allIndexNames->getArrayCopy()); // All indexes will be compacted
+$settings->setSkipOptimizeIndexes(true); // Skip Lucene's indexes optimization
+
+// Define the compact operation, pass the settings
+/** @var CompactDatabaseOperation $compactOp */
+$compactOp = new CompactDatabaseOperation($settings);
+
+// Execute compaction by passing the operation to maintenance()->server()->send()
+/** @var Operation $operation */
+$operation = $documentStore->maintenance()->server()->send($compactOp);
+
+// Wait for operation to complete
+$operation->waitForCompletion();
+`}
+
+
+#### Compact on other nodes:
+
+* By default, an operation executes on the server node that is defined by the [client configuration](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+* The following example will compact the database on all [member](../../../server/clustering/rachis/cluster-topology.mdx#nodes-states-and-types) nodes from its database-group topology.
+ `forNode` is used to execute the operation on a specific node.
+
+
+
+{`// Get all member nodes in the database-group using the 'GetDatabaseRecordOperation' operation
+/** @var DatabaseRecordWithEtag $databaseRecord */
+$databaseRecord = $documentStore->maintenance()->server()->send(new GetDatabaseRecordOperation("Northwind"));
+
+$allMemberNodes = $databaseRecord->getTopology()->getMembers();
+
+// Define the compact settings as needed
+$settings = new CompactSettings();
+
+$settings->setDatabaseName("Northwind");
+$settings->setDocuments(true); // Compact all documents in database
+
+// Execute the compact operation on each member node
+foreach ($allMemberNodes as $nodeTag) \{
+ // Define the compact operation, pass the settings
+    /** @var CompactDatabaseOperation $compactOp */
+ $compactOp = new CompactDatabaseOperation($settings);
+
+ // Execute the operation on a specific node
+    // Use \`forNode\` to specify the node to operate on
+ /** @var Operation $operation */
+ $operation = $documentStore->maintenance()->server()->forNode($nodeTag)->send($compactOp);
+ // Wait for operation to complete
+ $operation->waitForCompletion();
+\}
+`}
+
+
+
+
+
+## Compaction triggers compression
+
+* When document [compression](../../../server/storage/documents-compression.mdx) is turned on, compression is applied when:
+  * **New** documents are created and saved.
+  * **Existing** documents are modified and saved.
+
+* You can use the [compaction](../../../client-api/operations/server-wide/compact-database.mdx) operation
+ to **compress existing documents without having to modify and save** them.
+ Executing compaction triggers compression on ALL existing documents for the collections that are configured for compression.
+
+* Learn more about Compression -vs- Compaction [here](../../../server/storage/documents-compression.mdx#compression--vs--compaction).
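+
+For illustration, a minimal sketch of that flow. It assumes compression has already been configured for the relevant collections (see the documents-compression article):
+
+
+
+{`// Compression is assumed to be configured for the target collections.
+// Compacting the documents rewrites ALL existing documents of those
+// collections, storing them in compressed form.
+$settings = new CompactSettings();
+$settings->setDatabaseName("Northwind");
+$settings->setDocuments(true);
+
+$operation = $documentStore->maintenance()->server()->send(new CompactDatabaseOperation($settings));
+
+// Wait for operation to complete, during compaction the database is offline
+$operation->waitForCompletion();
+`}
+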
+
+
+
+## Compact from Studio
+
+* Compaction can be triggered from the [Storage Report](../../../studio/database/stats/storage-report.mdx) view in the Studio.
+ The operation will compact the database only on the node being viewed (node info is in the Studio footer).
+
+* To compact the database on another node,
+ simply trigger compaction from the Storage Report view in a browser tab opened for that other node.
+
+
+
+## Syntax
+
+
+
+{`public CompactDatabaseOperation(?CompactSettings $compactSettings)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **$compactSettings** | `?CompactSettings` | Settings for the compact operation |
+
+| `$compactSettings` class parameters | Type | Description |
+| - | - | - |
+| **$databaseName** | `?string` | Name of database to compact. Mandatory param. |
+| **$documents** | `bool` | Indicates if documents should be compacted. Optional param. |
+| **$indexes** | `?StringArray` | List of index names to compact. Optional param. |
+| **$skipOptimizeIndexes** | `bool` | `true` - skip Lucene's index optimization while compacting.<br/>`false` - Lucene's index optimization will take place while compacting. |
+| | | **Note**: Either **$documents** or **$indexes** (or both) must be specified |
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-python.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-python.mdx
new file mode 100644
index 0000000000..0317aec27a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_compact-database-python.mdx
@@ -0,0 +1,149 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use the `CompactDatabaseOperation` operation to **remove empty gaps on disk**
+ that still occupy space after deletes.
+ You can choose whether to compact _documents_ and/or _selected indexes_.
+
+* **During compaction the database will be offline**.
+  The operation is executed asynchronously as a background operation and can be waited for
+ using `wait_for_completion`.
+
+* The operation will **compact the database on one node**.
+ To compact all database-group nodes, the command must be sent to each node separately.
+
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the
+ [client configuration](../../../client-api/configuration/load-balance/overview.mdx#client-logic-for-choosing-a-node).
+
+* **Target database**:
+ The database to compact is specified in `CompactSettings` (see examples below).
+ An exception is thrown if the specified database doesn't exist on the server node.
+
+* In this page:
+ * [Examples](../../../client-api/operations/server-wide/compact-database.mdx#examples):
+ * [Compact documents](../../../client-api/operations/server-wide/compact-database.mdx#examples)
+ * [Compact specific indexes](../../../client-api/operations/server-wide/compact-database.mdx#compact-specific-indexes)
+ * [Compact all indexes](../../../client-api/operations/server-wide/compact-database.mdx#compact-all-indexes)
+ * [Compaction triggers compression](../../../client-api/operations/server-wide/compact-database.mdx#compaction-triggers-compression)
+ * [Compact from Studio](../../../client-api/operations/server-wide/compact-database.mdx#compact-from-studio)
+
+
+## Examples
+
+#### Compact documents:
+
+The following example will compact only **documents** for the specified database.
+
+
+
+{`# Define the compact settings
+settings = CompactSettings(
+ # Database to compact
+ "Northwind",
+ # Set 'documents' to True to compact all documents in database
+ # Indexes are not set and will not be compacted
+ documents=True,
+)
+
+# Define the compact operation, pass the settings
+compact_op = CompactDatabaseOperation(settings)
+
+# Execute compaction by passing the operation to maintenance.server.send_async
+operation = store.maintenance.server.send_async(compact_op)
+
+# Wait for operation to complete, during compaction the database is offline
+operation.wait_for_completion()
+`}
+
+
+#### Compact specific indexes:
+
+The following example will compact only specific indexes.
+
+
+
+{`# Define the compact settings
+settings = CompactSettings(
+ # Database to compact
+ database_name="Northwind",
+ # Setting 'documents' to False will compact only the specified indexes
+ documents=False,
+ # Specify which indexes to compact
+ indexes=["Orders/Totals", "Orders/ByCompany"],
+    # Index optimization is a Lucene feature that reclaims disk space and improves efficiency
+    # Set whether to skip this optimization when compacting the indexes
+ skip_optimize_indexes=False,
+)
+# Define the compact operation, pass the settings
+compact_op = CompactDatabaseOperation(settings)
+
+# Execute compaction by passing the operation to maintenance.server.send
+operation = store.maintenance.server.send_async(compact_op)
+# Wait for operation to complete
+operation.wait_for_completion()
+`}
+
+
+#### Compact all indexes:
+
+The following example will compact all indexes and documents.
+
+
+
+{`# Get all index names in the database using the 'GetIndexNamesOperation' operation
+# Use 'for_database' if the target database is different from the default database defined on the store
+int_max = 2**31 - 1  # page size large enough to fetch all index names
+all_indexes_names = store.maintenance.for_database("Northwind").send(GetIndexNamesOperation(0, int_max))
+
+# Define the compact settings
+settings = CompactSettings(
+ database_name="Northwind", # Database to compact
+ documents=True, # Compact all documents
+ indexes=all_indexes_names, # All indexes will be compacted
+ skip_optimize_indexes=True, # Skip Lucene's indexes optimization
+)
+
+# Define the compact operation, pass the settings
+compact_op = CompactDatabaseOperation(settings)
+
+# Execute compaction by passing the operation to maintenance.server.send
+operation = store.maintenance.server.send(compact_op)
+# Wait for operation to complete
+operation.wait_for_completion()
+`}
+
+
+
+
+
+## Compaction triggers compression
+
+* When document [compression](../../../server/storage/documents-compression.mdx) is turned on, compression is applied when:
+  * **New** documents are created and saved.
+  * **Existing** documents are modified and saved.
+
+* You can use the [compaction](../../../client-api/operations/server-wide/compact-database.mdx) operation
+ to **compress existing documents without having to modify and save** them.
+ Executing compaction triggers compression on ALL existing documents for the collections that are configured for compression.
+
+* Learn more about Compression -vs- Compaction [here](../../../server/storage/documents-compression.mdx#compression--vs--compaction).
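+
+For illustration, a minimal sketch of that flow. It assumes compression has already been configured for the relevant collections (see the documents-compression article):
+
+
+
+{`# Compression is assumed to be configured for the target collections.
+# Compacting the documents rewrites ALL existing documents of those
+# collections, storing them in compressed form.
+settings = CompactSettings(database_name="Northwind", documents=True)
+
+operation = store.maintenance.server.send(CompactDatabaseOperation(settings))
+
+# Wait for operation to complete, during compaction the database is offline
+operation.wait_for_completion()
+`}
+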
+
+
+
+## Compact from Studio
+
+* Compaction can be triggered from the [Storage Report](../../../studio/database/stats/storage-report.mdx) view in the Studio.
+ The operation will compact the database only on the node being viewed (node info is in the Studio footer).
+
+* To compact the database on another node,
+ simply trigger compaction from the Storage Report view in a browser tab opened for that other node.
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_create-database-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_create-database-csharp.mdx
new file mode 100644
index 0000000000..0fe8e76f04
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_create-database-csharp.mdx
@@ -0,0 +1,453 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `CreateDatabaseOperation` to create a new database from the **Client API**, as described below.
+ To create a new database from the **Studio**, see [Create database](../../../studio/database/create-new-database/general-flow.mdx).
+
+* This operation requires a client certificate with a security clearance of _Operator_ or _ClusterAdmin_.
+ To learn which operations are allowed at each level, see [Security clearance and permissions](../../../server/security/authorization/security-clearance-and-permissions.mdx).
+
+* In this article:
+ * [Create new database](../../../client-api/operations/server-wide/create-database.mdx#create-new-database)
+ * [Example I - Create non-sharded database](../../../client-api/operations/server-wide/create-database.mdx#example-i---create-non-sharded-database)
+ * [Example II - Create sharded database](../../../client-api/operations/server-wide/create-database.mdx#example-ii---create-sharded-database)
+ * [Example III - Ensure database does not exist before creating](../../../client-api/operations/server-wide/create-database.mdx#example-iii---ensure-database-does-not-exist-before-creating)
+ * [Syntax](../../../client-api/operations/server-wide/create-database.mdx#syntax)
+
+
+## Create new database
+
+
+
+##### Example I - Create non-sharded database
+* The following simple example creates a non-sharded database with the default replication factor of 1.
+
+
+
+
+{`// Define the create database operation, pass an instance of DatabaseRecord
+var createDatabaseOp = new CreateDatabaseOperation(new DatabaseRecord("DatabaseName"));
+
+// Execute the operation by passing it to Maintenance.Server.Send
+store.Maintenance.Server.Send(createDatabaseOp);
+`}
+
+
+
+
+{`// Define the create database operation, pass an instance of DatabaseRecord
+var createDatabaseOp = new CreateDatabaseOperation(new DatabaseRecord("DatabaseName"));
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+await store.Maintenance.Server.SendAsync(createDatabaseOp);
+`}
+
+
+
+
+{`// Define the create database operation
+var createDatabaseOp = new CreateDatabaseOperation(builder => builder
+ // Call 'Regular' to create a non-sharded database
+ .Regular("DatabaseName"));
+
+// Execute the operation by passing it to Maintenance.Server.Send
+store.Maintenance.Server.Send(createDatabaseOp);
+`}
+
+
+
+
+{`// Define the create database operation
+var createDatabaseOp = new CreateDatabaseOperation(builder => builder
+ // Call 'Regular' to create a non-sharded database
+ .Regular("DatabaseName"));
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+await store.Maintenance.Server.SendAsync(createDatabaseOp);
+`}
+
+
+
+
+
+
+
+##### Example II - Create sharded database
+* The following example creates a sharded database with 3 shards, each with a replication factor of 2.
+* In addition, it:
+  * enables revisions
+  * enables document expiration
+  * applies some configuration settings to the database
+
+
+
+
+{`// Define the database record:
+var databaseRecord = new DatabaseRecord("ShardedDatabaseName") {
+
+ // Configure sharding:
+ Sharding = new ShardingConfiguration()
+ {
+ // Ensure nodes "A", "B", and "C" are available in the cluster
+ // before executing the database creation.
+        Shards = new Dictionary<int, DatabaseTopology>()
+        {
+            {0, new DatabaseTopology { Members = new List<string> { "A", "B" }}},
+            {1, new DatabaseTopology { Members = new List<string> { "A", "C" }}},
+            {2, new DatabaseTopology { Members = new List<string> { "B", "C" }}}
+ }
+ },
+
+ // Enable revisions on all collections:
+ Revisions = new RevisionsConfiguration()
+ {
+ Default = new RevisionsCollectionConfiguration()
+ {
+ Disabled = false, MinimumRevisionsToKeep = 5
+ }
+ },
+
+ // Enable the document expiration feature:
+ Expiration = new ExpirationConfiguration()
+ {
+ Disabled = false
+ },
+
+ // Apply some database configuration setting:
+    Settings = new Dictionary<string, string>()
+ {
+ {"Databases.QueryTimeoutInSec", "500"}
+ }
+};
+
+// Define the create database operation
+var createDatabaseOp = new CreateDatabaseOperation(databaseRecord);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+store.Maintenance.Server.Send(createDatabaseOp);
+`}
+
+
+
+
+{`// Define the database record:
+var databaseRecord = new DatabaseRecord("ShardedDatabaseName") {
+
+ // Configure sharding:
+ Sharding = new ShardingConfiguration()
+ {
+ // Ensure nodes "A", "B", and "C" are available in the cluster
+ // before executing the database creation.
+        Shards = new Dictionary<int, DatabaseTopology>()
+        {
+            {0, new DatabaseTopology { Members = new List<string> { "A", "B" }}},
+            {1, new DatabaseTopology { Members = new List<string> { "A", "C" }}},
+            {2, new DatabaseTopology { Members = new List<string> { "B", "C" }}}
+ }
+ },
+
+ // Enable revisions on all collections:
+ Revisions = new RevisionsConfiguration()
+ {
+ Default = new RevisionsCollectionConfiguration()
+ {
+ Disabled = false, MinimumRevisionsToKeep = 5
+ }
+ },
+
+ // Enable the document expiration feature:
+ Expiration = new ExpirationConfiguration()
+ {
+ Disabled = false
+ },
+
+ // Apply some database configuration setting:
+    Settings = new Dictionary<string, string>()
+ {
+ {"Databases.QueryTimeoutInSec", "500"}
+ }
+};
+
+// Define the create database operation
+var createDatabaseOp = new CreateDatabaseOperation(databaseRecord);
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+await store.Maintenance.Server.SendAsync(createDatabaseOp);
+`}
+
+
+
+
+{`// Define the create database operation
+var createDatabaseOp = new CreateDatabaseOperation(builder => builder
+
+ // Call 'Sharded' to create a sharded database
+ .Sharded("ShardedDatabaseName", topology => topology
+ // Ensure nodes "A", "B", and "C" are available in the cluster
+ // before executing the database creation.
+        .AddShard(0, new DatabaseTopology {Members = new List<string> {"A", "B"}})
+        .AddShard(1, new DatabaseTopology {Members = new List<string> {"A", "C"}})
+        .AddShard(2, new DatabaseTopology {Members = new List<string> {"B", "C"}}))
+ // Enable revisions on all collections:
+ .ConfigureRevisions(new RevisionsConfiguration()
+ {
+ Default = new RevisionsCollectionConfiguration()
+ {
+ Disabled = false, MinimumRevisionsToKeep = 5
+ }
+ })
+ // Enable the document expiration feature:
+ .ConfigureExpiration(new ExpirationConfiguration()
+ {
+ Disabled = false
+ })
+ // Apply some database configuration setting:
+    .WithSettings(new Dictionary<string, string>()
+ {
+ { "Databases.QueryTimeoutInSec", "500" }
+ })
+);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+store.Maintenance.Server.Send(createDatabaseOp);
+`}
+
+
+
+
+{`// Define the create database operation
+var createDatabaseOp = new CreateDatabaseOperation(builder => builder
+
+ // Call 'Sharded' to create a sharded database
+ .Sharded("ShardedDatabaseName", topology => topology
+ // Ensure nodes "A", "B", and "C" are available in the cluster
+ // before executing the database creation.
+        .AddShard(0, new DatabaseTopology {Members = new List<string> {"A", "B"}})
+        .AddShard(1, new DatabaseTopology {Members = new List<string> {"A", "C"}})
+        .AddShard(2, new DatabaseTopology {Members = new List<string> {"B", "C"}}))
+ // Enable revisions on all collections:
+ .ConfigureRevisions(new RevisionsConfiguration()
+ {
+ Default = new RevisionsCollectionConfiguration()
+ {
+ Disabled = false, MinimumRevisionsToKeep = 5
+ }
+ })
+ // Enable the document expiration feature:
+ .ConfigureExpiration(new ExpirationConfiguration()
+ {
+ Disabled = false
+ })
+ // Apply some database configuration setting:
+    .WithSettings(new Dictionary<string, string>()
+ {
+ { "Databases.QueryTimeoutInSec", "500" }
+ })
+);
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+await store.Maintenance.Server.SendAsync(createDatabaseOp);
+`}
+
+
+
+
+
+
+
+##### Example III - Ensure database does not exist before creating
+* To ensure the database does not already exist before creating it, follow this example:
+
+
+
+
+{`var databaseName = "MyDatabaseName";
+
+try
+{
+ // Try to fetch database statistics to check if the database exists
+ store.Maintenance.ForDatabase(databaseName)
+ .Send(new GetStatisticsOperation());
+}
+catch (DatabaseDoesNotExistException)
+{
+ try
+ {
+ // The database does not exist, try to create:
+ var createDatabaseOp = new CreateDatabaseOperation(
+ new DatabaseRecord(databaseName));
+
+ store.Maintenance.Server.Send(createDatabaseOp);
+ }
+ catch (ConcurrencyException)
+ {
+ // The database was created by another client before this call completed
+ }
+}
+`}
+
+
+
+
+{`var databaseName = "MyDatabaseName";
+
+try
+{
+ // Try to fetch database statistics to check if the database exists:
+ await store.Maintenance.ForDatabase(databaseName)
+ .SendAsync(new GetStatisticsOperation());
+}
+catch (DatabaseDoesNotExistException)
+{
+ try
+ {
+ // The database does not exist, try to create:
+ var createDatabaseOp = new CreateDatabaseOperation(
+ new DatabaseRecord(databaseName));
+
+ await store.Maintenance.Server.SendAsync(createDatabaseOp);
+ }
+ catch (ConcurrencyException)
+ {
+ // The database was created by another client before this call completed
+ }
+}
+`}
+
+
+
+
+
+
+
+## Syntax
+
+
+
+{`// CreateDatabaseOperation overloads:
+// ==================================
+public CreateDatabaseOperation(DatabaseRecord databaseRecord) \{\}
+public CreateDatabaseOperation(DatabaseRecord databaseRecord, int replicationFactor) \{\}
+public CreateDatabaseOperation(Action<IDatabaseRecordBuilderInitializer> builder) \{\}
+`}
+
+
+
+| Parameter | Description |
+|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **databaseRecord** | Instance of `DatabaseRecord` containing database configuration. See [The Database Record](../../../client-api/operations/server-wide/create-database.mdx#the-database-record) below. |
+| **replicationFactor** | Number of nodes the database should be replicated to.<br/>If not specified, the value is taken from `databaseRecord.Topology.ReplicationFactor`, or defaults to **`1`** if that is not set.<br/>If `Topology` is provided, the `replicationFactor` is ignored. |
+| **builder** | Callback used to initialize and fluently configure a new DatabaseRecord. Receives an `IDatabaseRecordBuilderInitializer` on which you invoke builder methods to construct the record. See [The Database Record Builder](../../../client-api/operations/server-wide/create-database.mdx#the-database-record-builder) below. |
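+
+For example, a minimal sketch that passes an explicit replication factor and lets the server choose the nodes:
+
+
+
+{`// Create the database on 3 of the cluster nodes
+var createDatabaseOp = new CreateDatabaseOperation(
+    new DatabaseRecord("DatabaseName"), replicationFactor: 3);
+
+store.Maintenance.Server.Send(createDatabaseOp);
+`}
+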
+### The Database Record:
+
+The `DatabaseRecord` is a collection of database configurations:
+
+| DatabaseRecord constructors | Description |
+|---------------------------------------|--------------------------------------------------------------------|
+| DatabaseRecord() | Initialize a new database record. |
+| DatabaseRecord(`string` databaseName) | Initialize a new database record with the specified database name. |
+
+
+
+**Note:**
+
+* Only the properties listed in the table below can be configured in the `DatabaseRecord` object passed to `CreateDatabaseOperation`.
+* For example, although ongoing task definitions are public on the _DatabaseRecord_ class, setting them during database creation will result in an exception.
+ To define ongoing tasks (e.g., backups, ETL, replication), use the appropriate dedicated operation after the database has been created.
+
+
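+For example, a minimal sketch of that two-step flow - the periodic-backup task and all of its values are illustrative only:
+
+
+
+{`// 1) Create the database - without any ongoing task definitions
+store.Maintenance.Server.Send(
+    new CreateDatabaseOperation(new DatabaseRecord("DatabaseName")));
+
+// 2) Only after the database exists, define ongoing tasks,
+//    e.g. a periodic backup (illustrative values)
+var backupConfig = new PeriodicBackupConfiguration
+{
+    Name = "NightlyBackup",
+    FullBackupFrequency = "0 2 * * *", // cron - every night at 02:00
+    LocalSettings = new LocalSettings { FolderPath = "/path/to/backups" }
+};
+
+store.Maintenance.ForDatabase("DatabaseName")
+    .Send(new UpdatePeriodicBackupOperation(backupConfig));
+`}
+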
+
+| DatabaseRecord properties | Type | Description |
+|---------------------------|------|-------------|
+| **AiConnectionStrings** | `Dictionary<string, AiConnectionString>` | Define [Ai Connection Strings](../../../ai-integration/connection-strings/connection-strings-overview.mdx), keyed by name. |
+| **Analyzers** | `Dictionary<string, AnalyzerDefinition>` | A dictionary defining the [Custom Analyzers](../../../indexes/using-analyzers.mdx#creating-custom-analyzers) available to the database. |
+| **AutoIndexes** | `Dictionary<string, AutoIndexDefinition>` | Auto-index definitions for the database. |
+| **Client** | `ClientConfiguration` | [Client behavior](../../../studio/server/client-configuration.mdx) configuration. |
+| **ConflictSolverConfig** | `ConflictSolver` | Define the strategy used to resolve [Replication conflicts](../../../server/clustering/replication/replication-conflicts.mdx). |
+| **DataArchival** | `DataArchivalConfiguration` | [Data Archival](../../../data-archival/overview.mdx) configuration for the database. |
+| **DatabaseName** | `string` | The database name. |
+| **Disabled** | `bool` | Set the database initial state. `true` - disable the database. `false` - (default) the database will be enabled.<br/>This can be modified later via [ToggleDatabasesStateOperation](../../../client-api/operations/server-wide/toggle-databases-state.mdx). |
+| **DocumentsCompression** | `DocumentsCompressionConfiguration` | Configuration settings for [Compressing documents](../../../server/storage/documents-compression.mdx). |
+| **ElasticSearchConnectionStrings** | `Dictionary<string, ElasticSearchConnectionString>` | Define [ElasticSearch Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-elasticsearch-connection-string), keyed by name. |
+| **Encrypted** | `bool` | `true` - create an [Encrypted database](../../../server/security/encryption/database-encryption.mdx).<br/>Note: Use `PutSecretKeyCommand` to send your secret key to the server BEFORE creating the database.<br/>`false` - (default) the database will not be encrypted. |
+| **Expiration** | `ExpirationConfiguration` | [Expiration](../../../server/extensions/expiration.mdx) configuration for the database. |
+| **Indexes** | `Dictionary<string, IndexDefinition>` | Define [Indexes](../../../client-api/operations/maintenance/indexes/put-indexes.mdx) that will be created with the database - no separate deployment needed. |
+| **Integrations** | `IntegrationConfigurations` | Configuration for [Integrations](../../../integrations/postgresql-protocol/overview.mdx), e.g. `PostgreSqlConfiguration`. |
+| **LockMode** | `DatabaseLockMode` | Set the database lock mode. (default: `Unlock`)<br/>This can be modified later via `SetDatabasesLockOperation`. |
+| **OlapConnectionStrings** | `Dictionary<string, OlapConnectionString>` | Define [OLAP Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-olap-connection-string), keyed by name. |
+| **QueueConnectionStrings** | `Dictionary<string, QueueConnectionString>` | Define [Queue Connection Strings](../../../server/ongoing-tasks/etl/queue-etl/overview.mdx), keyed by name. |
+| **RavenConnectionStrings** | `Dictionary<string, RavenConnectionString>` | Define [Raven Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-ravendb-connection-string), keyed by name. |
+| **Refresh** | `RefreshConfiguration` | [Refresh](../../../server/extensions/refresh.mdx) configuration for the database. |
+| **Revisions** | `RevisionsConfiguration` | [Revisions](../../../document-extensions/revisions/client-api/operations/configure-revisions.mdx) configuration for the database. |
+| **RevisionsBin** | `RevisionsBinConfiguration` | Configuration for the [Revisions Bin Cleaner](../../../document-extensions/revisions/revisions-bin-cleaner.mdx). |
+| **RevisionsForConflicts** | `RevisionsCollectionConfiguration` | Set the revisions configuration for conflicting documents. |
+| **RollingIndexes** | `Dictionary<string, RollingIndex>` | Dictionary mapping index names to their deployment configurations. |
+| **Settings** | `Dictionary<string, string>` | [Configuration](../../../server/configuration/configuration-options.mdx) settings for the database. |
+| **Sharding** | `ShardingConfiguration` | The sharding configuration. |
+| **SnowflakeConnectionStrings** | `Dictionary<string, SnowflakeConnectionString>` | Define [Snowflake Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-snowflake-connection-string), keyed by name. |
+| **Sorters** | `Dictionary<string, SorterDefinition>` | A dictionary defining the [Custom Sorters](../../../studio/database/settings/custom-sorters.mdx) available to the database. |
+| **SqlConnectionStrings** | `Dictionary<string, SqlConnectionString>` | Define [SQL Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-sql-connection-string), keyed by name. |
+| **Studio** | `StudioConfiguration` | [Studio Configuration](../../../studio/database/settings/studio-configuration.mdx). |
+| **TimeSeries** | `TimeSeriesConfiguration` | [Time series](../../../studio/database/settings/time-series-settings.mdx) configuration for the database. |
+| **Topology** | `DatabaseTopology` | Optional topology configuration.<br/>Defaults to `null`, in which case the server will determine which nodes to place the database on, based on the specified `ReplicationFactor`. |
+| **UnusedDatabaseIds** | `HashSet<string>` | Set database IDs that will be excluded when creating new change vectors. |
+
+### The Database Record Builder:
+
+
+
+{`public interface IDatabaseRecordBuilderInitializer
+\{
+ public IDatabaseRecordBuilder Regular(string databaseName);
+ public IShardedDatabaseRecordBuilder Sharded(string databaseName, Action builder);
+ public DatabaseRecord ToDatabaseRecord();
+\}
+
+public interface IShardedDatabaseRecordBuilder : IDatabaseRecordBuilderBase
+\{
+\}
+
+// Available configurations:
+// =========================
+
+public interface IDatabaseRecordBuilder : IDatabaseRecordBuilderBase
+\{
+ public IDatabaseRecordBuilderBase WithTopology(DatabaseTopology topology);
+ public IDatabaseRecordBuilderBase WithTopology(Action builder);
+ public IDatabaseRecordBuilderBase WithReplicationFactor(int replicationFactor);
+\}
+
+public interface IDatabaseRecordBuilderBase
+\{
+ DatabaseRecord ToDatabaseRecord();
+
+ IDatabaseRecordBuilderBase ConfigureClient(ClientConfiguration configuration);
+ IDatabaseRecordBuilderBase ConfigureDocumentsCompression(DocumentsCompressionConfiguration configuration);
+ IDatabaseRecordBuilderBase ConfigureExpiration(ExpirationConfiguration configuration);
+ IDatabaseRecordBuilderBase ConfigureRefresh(RefreshConfiguration configuration);
+ IDatabaseRecordBuilderBase ConfigureRevisions(RevisionsConfiguration configuration);
+ IDatabaseRecordBuilderBase ConfigureStudio(StudioConfiguration configuration);
+ IDatabaseRecordBuilderBase ConfigureTimeSeries(TimeSeriesConfiguration configuration);
+
+ IDatabaseRecordBuilderBase Disabled();
+ IDatabaseRecordBuilderBase Encrypted();
+
+ IDatabaseRecordBuilderBase WithAnalyzers(params AnalyzerDefinition[] analyzerDefinitions);
+ IDatabaseRecordBuilderBase WithConnectionStrings(Action builder);
+ IDatabaseRecordBuilderBase WithIndexes(params IndexDefinition[] indexDefinitions);
+ IDatabaseRecordBuilderBase WithIntegrations(Action builder);
+ IDatabaseRecordBuilderBase WithLockMode(DatabaseLockMode lockMode);
+    IDatabaseRecordBuilderBase WithSettings(Dictionary<string, string> settings);
+    IDatabaseRecordBuilderBase WithSettings(Action<Dictionary<string, string>> builder);
+ IDatabaseRecordBuilderBase WithSorters(params SorterDefinition[] sorterDefinitions);
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_create-database-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_create-database-java.mdx
new file mode 100644
index 0000000000..26b0a191ef
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_create-database-java.mdx
@@ -0,0 +1,143 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `CreateDatabaseOperation` to create a new database from the **Client API**, as described below.
+ To create a new database from the **Studio**, see [Create database](../../../studio/database/create-new-database/general-flow.mdx).
+
+* This operation requires a client certificate with a security clearance of _Operator_ or _ClusterAdmin_.
+ To learn which operations are allowed at each level, see [Security clearance and permissions](../../../server/security/authorization/security-clearance-and-permissions.mdx).
+
+* In this article:
+ * [Create new database](../../../client-api/operations/server-wide/create-database.mdx#create-new-database)
+ * [Example I - Create database](../../../client-api/operations/server-wide/create-database.mdx#example-i---create-non-sharded-database)
+ * [Example II - Ensure database does not exist before creating](../../../client-api/operations/server-wide/create-database.mdx#example-ii---ensure-database-does-not-exist-before-creating)
+ * [Syntax](../../../client-api/operations/server-wide/create-database.mdx#syntax)
+
+
+## Create new database
+
+
+
+##### Example I - Create database
+* The following simple example creates a non-sharded database with the default replication factor of 1.
+
+
+
+{`DatabaseRecord databaseRecord = new DatabaseRecord();
+databaseRecord.setDatabaseName("MyNewDatabase");
+store.maintenance().server().send(new CreateDatabaseOperation(databaseRecord));
+`}
+
+
+
+
+
+
+##### Example II - Ensure database does not exist before creating
+* To ensure the database does not already exist before creating it, follow this example:
+
+
+
+{`public void ensureDatabaseExists(IDocumentStore store, String database, boolean createDatabaseIfNotExists) \{
+ database = ObjectUtils.firstNonNull(database, store.getDatabase());
+
+ if (StringUtils.isBlank(database)) \{
+ throw new IllegalArgumentException("Value cannot be null or whitespace");
+ \}
+
+ try \{
+ store.maintenance().forDatabase(database).send(new GetStatisticsOperation());
+ \} catch (DatabaseDoesNotExistException e) \{
+ if (!createDatabaseIfNotExists) \{
+ throw e;
+ \}
+
+ try \{
+ DatabaseRecord databaseRecord = new DatabaseRecord();
+ databaseRecord.setDatabaseName(database);
+ store.maintenance().server().send(new CreateDatabaseOperation(databaseRecord));
+ \} catch (ConcurrencyException ce) \{
+ // The database was already created before calling CreateDatabaseOperation
+ \}
+ \}
+\}
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public CreateDatabaseOperation(DatabaseRecord databaseRecord)
+
+public CreateDatabaseOperation(DatabaseRecord databaseRecord, int replicationFactor)
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **databaseRecord** | DatabaseRecord | Instance of `DatabaseRecord` containing database configuration. |
+| **replicationFactor** | int | Number of nodes the database should be replicated to.<br/>If not specified, the value is taken from `topology.replicationFactor`, or defaults to **`1`** if that is not set.<br/>If `topology` is provided, the `replicationFactor` is ignored. |
+
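+For example, a minimal sketch that passes an explicit replication factor and lets the server choose the nodes:
+
+
+
+{`DatabaseRecord databaseRecord = new DatabaseRecord();
+databaseRecord.setDatabaseName("MyNewDatabase");
+
+// Create the database on 3 of the cluster nodes
+store.maintenance().server().send(new CreateDatabaseOperation(databaseRecord, 3));
+`}
+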
+## DatabaseRecord
+
+`DatabaseRecord` is a collection of database configurations.
+
+| constructor | Description |
+|---------------------------------------|----------------------------------|
+| DatabaseRecord(`string` databaseName) | Initialize a new database record |
+
+
+
+**Note:**
+
+* Only the properties listed in the table below can be configured in the `DatabaseRecord` object passed to `CreateDatabaseOperation`.
+* For example, although ongoing task definitions are public on the _DatabaseRecord_ class, setting them during database creation will result in an exception.
+ To define ongoing tasks (e.g., backups, ETL, replication), use the appropriate dedicated operation after the database has been created.
+
+
+
+| Property | Type | Description |
+|------------------------------------|----------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **analyzers** | `Map` | A dictionary defining the [Custom Analyzers](../../../indexes/using-analyzers.mdx#creating-custom-analyzers) available to the database. |
+| **autoIndexes** | `Map` | Auto-index definitions for the database. |
+| **client** | `ClientConfiguration` | [Client behavior](../../../studio/server/client-configuration.mdx) configuration. |
+| **conflictSolverConfig** | `ConflictSolver` | Define the strategy used to resolve [Replication conflicts](../../../server/clustering/replication/replication-conflicts.mdx). |
+| **dataArchival** | `DataArchivalConfiguration` | [Data Archival](../../../data-archival/overview.mdx) configuration for the database. |
+| **databaseName** | `String` | The database name. |
+| **disabled** | `boolean` (default: false) | Set the database initial state. `true` - disable the database. `false` - (default) the database will be enabled.
This can be modified later via [ToggleDatabasesStateOperation](../../../client-api/operations/server-wide/toggle-databases-state.mdx). |
+| **documentsCompression** | `DocumentsCompressionConfiguration` | Configuration settings for [Compressing documents](../../../server/storage/documents-compression.mdx). |
+| **elasticSearchConnectionStrings** | `Map` | Define [ElasticSearch Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-elasticsearch-connection-string), keyed by name. |
+| **encrypted** | `boolean` (default: false) | `true` - create an [Encrypted database](../../../server/security/encryption/database-encryption.mdx).
Note: Use `PutSecretKeyCommand` to send your secret key to the server BEFORE creating the database.
`false` - (default) the database will not be encrypted. |
+| **expiration** | `ExpirationConfiguration` | [Expiration](../../../server/extensions/expiration.mdx) configuration for the database. |
+| **indexes** | `Map` | Define [Indexes](../../../client-api/operations/maintenance/indexes/put-indexes.mdx) that will be created with the database - no separate deployment needed. |
+| **integrations** | `IntegrationConfigurations` | Configuration for [Integrations](../../../integrations/postgresql-protocol/overview.mdx), e.g. `PostgreSqlConfiguration`. |
+| **lockMode** | `DatabaseLockMode` | Set the database lock mode. (default: `Unlock`)
This can be modified later via `SetDatabasesLockOperation`. |
+| **olapConnectionStrings** | `Map` | Define [OLAP Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-olap-connection-string), keyed by name. |
+| **queueConnectionStrings** | `Map` | Define [Queue Connection Strings](../../../server/ongoing-tasks/etl/queue-etl/overview.mdx), keyed by name. |
+| **ravenConnectionStrings** | `Map` | Define [Raven Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-a-ravendb-connection-string), keyed by name. |
+| **refresh** | `RefreshConfiguration` | [Refresh](../../../server/extensions/refresh.mdx) configuration for the database. |
+| **revisions** | `RevisionsConfiguration` | [Revisions](../../../document-extensions/revisions/client-api/operations/configure-revisions.mdx) configuration for the database. |
+| **revisionsForConflicts** | `RevisionsCollectionConfiguration` | Set the revisions configuration for conflicting documents. |
+| **rollingIndexes** | `Map` | Dictionary mapping index names to their deployment configurations. |
+| **settings** | `Map` | [Configuration](../../../server/configuration/configuration-options.mdx) settings for the database. |
+| **sharding** | `ShardingConfiguration` | The sharding configuration. |
+| **sorters** | `Map` | A dictionary defining the [Custom Sorters](../../../studio/database/settings/custom-sorters.mdx) available to the database. |
+| **sqlConnectionStrings** | `Map` | Define [SQL Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string.mdx#add-an-sql-connection-string), keyed by name. |
+| **studio** | `StudioConfiguration` | [Studio Configuration](../../../studio/database/settings/studio-configuration.mdx). |
+| **timeSeries** | `TimeSeriesConfiguration` | [Time series](../../../studio/database/settings/time-series-settings.mdx) configuration for the database. |
+| **topology** | `DatabaseTopology` | Optional topology configuration. Defaults to `null`, in which case the server will determine which nodes to place the database on, based on the specified `ReplicationFactor`. |
+| **unusedDatabaseIds** | `Set` | Set database IDs that will be excluded when creating new change vectors. |
+
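+For illustration, a minimal sketch (the property values are illustrative) of setting several of these properties on the `DatabaseRecord` before passing it to `CreateDatabaseOperation`:
+
+{`var databaseRecord = new DatabaseRecord("MyNewDatabase")
+\{
+    // Enable revisions for all collections (illustrative configuration)
+    Revisions = new RevisionsConfiguration
+    \{
+        Default = new RevisionsCollectionConfiguration \{ Disabled = false \}
+    \},
+    // Database configuration settings (key/value pairs)
+    Settings = new Dictionary<string, string>
+    \{
+        ["Databases.QueryTimeoutInSec"] = "300"
+    \}
+\};
+
+store.Maintenance.Server.Send(new CreateDatabaseOperation(databaseRecord));
+`}
+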
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_delete-database-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_delete-database-csharp.mdx
new file mode 100644
index 0000000000..fcb70cb43a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_delete-database-csharp.mdx
@@ -0,0 +1,94 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to delete databases from a server, with the option to also remove all of their data from the hard drive.
+
+## Syntax
+
+
+
+{`public DeleteDatabasesOperation(
+ string databaseName,
+ bool hardDelete,
+ string fromNode = null,
+ TimeSpan? timeToWaitForConfirmation = null)
+\{
+\}
+
+public DeleteDatabasesOperation(DeleteDatabasesOperation.Parameters parameters)
+\{
+\}
+
+public class Parameters
+\{
+ public string[] DatabaseNames \{ get; set; \}
+
+ public bool HardDelete \{ get; set; \}
+
+ public string[] FromNodes \{ get; set; \}
+
+ public TimeSpan? TimeToWaitForConfirmation \{ get; set; \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **DatabaseName** | string | Name of the database to delete |
+| **HardDelete** | bool | `true` - remove all data (data files, indexing files, etc.) from the hard drive |
+| **FromNode** | string | Remove the database only from a specific node. Default: `null`, which deletes it from all nodes |
+| **TimeToWaitForConfirmation** | TimeSpan | Time to wait for confirmation. Default: `null`, which uses the server default (15 seconds) |
+
+## Example I
+
+
+
+
+{`var parameters = new DeleteDatabasesOperation.Parameters
+{
+ DatabaseNames = new[] { "MyNewDatabase", "OtherDatabaseToDelete" },
+ HardDelete = true,
+ FromNodes = new[] { "A", "C" }, // optional
+ TimeToWaitForConfirmation = TimeSpan.FromSeconds(30) // optional
+};
+store.Maintenance.Server.Send(new DeleteDatabasesOperation(parameters));
+`}
+
+
+
+
+{`var parameters = new DeleteDatabasesOperation.Parameters
+{
+ DatabaseNames = new[] { "MyNewDatabase", "OtherDatabaseToDelete" },
+ HardDelete = true,
+ FromNodes = new[] { "A", "C" }, // optional
+ TimeToWaitForConfirmation = TimeSpan.FromSeconds(30) // optional
+};
+await store.Maintenance.Server.SendAsync(new DeleteDatabasesOperation(parameters));
+`}
+
+
+
+
+## Example II
+
+To delete just one database from a server, you can also use this simplified constructor:
+
+
+
+
+{`store.Maintenance.Server.Send(new DeleteDatabasesOperation("MyNewDatabase", hardDelete: true, fromNode: null, timeToWaitForConfirmation: null));
+`}
+
+
+
+
+{`await store.Maintenance.Server.SendAsync(new DeleteDatabasesOperation("MyNewDatabase", hardDelete: true, fromNode: null, timeToWaitForConfirmation: null));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_delete-database-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_delete-database-java.mdx
new file mode 100644
index 0000000000..dcff042968
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_delete-database-java.mdx
@@ -0,0 +1,101 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to delete databases from a server, with the option to also remove all of their data from the hard drive.
+
+## Syntax
+
+
+
+{`public DeleteDatabasesOperation(String databaseName, boolean hardDelete)
+
+public DeleteDatabasesOperation(String databaseName, boolean hardDelete, String fromNode)
+
+public DeleteDatabasesOperation(String databaseName, boolean hardDelete, String fromNode, Duration timeToWaitForConfirmation)
+
+public DeleteDatabasesOperation(Parameters parameters)
+`}
+
+
+
+
+
+{`public static class Parameters \{
+ private String[] databaseNames;
+ private boolean hardDelete;
+ private String[] fromNodes;
+ private Duration timeToWaitForConfirmation;
+
+ public String[] getDatabaseNames() \{
+ return databaseNames;
+ \}
+
+ public void setDatabaseNames(String[] databaseNames) \{
+ this.databaseNames = databaseNames;
+ \}
+
+ public boolean isHardDelete() \{
+ return hardDelete;
+ \}
+
+ public void setHardDelete(boolean hardDelete) \{
+ this.hardDelete = hardDelete;
+ \}
+
+ public String[] getFromNodes() \{
+ return fromNodes;
+ \}
+
+ public void setFromNodes(String[] fromNodes) \{
+ this.fromNodes = fromNodes;
+ \}
+
+ public Duration getTimeToWaitForConfirmation() \{
+ return timeToWaitForConfirmation;
+ \}
+
+ public void setTimeToWaitForConfirmation(Duration timeToWaitForConfirmation) \{
+ this.timeToWaitForConfirmation = timeToWaitForConfirmation;
+ \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **DatabaseName** | String | Name of the database to delete |
+| **HardDelete** | boolean | `true` - remove all data (data files, indexing files, etc.) from the hard drive |
+| **FromNode** | String | Remove the database only from a specific node. Default: `null`, which deletes it from all nodes |
+| **TimeToWaitForConfirmation** | Duration | Time to wait for confirmation. Default: `null`, which uses the server default (15 seconds) |
+
+## Example I
+
+
+
+{`DeleteDatabasesOperation.Parameters parameters = new DeleteDatabasesOperation.Parameters();
+parameters.setDatabaseNames(new String[]\{ "MyNewDatabase", "OtherDatabaseToDelete" \});
+parameters.setHardDelete(true);
+parameters.setFromNodes(new String[]\{ "A", "C" \}); //optional
+parameters.setTimeToWaitForConfirmation(Duration.ofSeconds(30)); // optional
+
+store.maintenance()
+ .server().send(new DeleteDatabasesOperation(parameters));
+`}
+
+
+
+## Example II
+
+To delete just one database from a server, you can also use this constructor:
+
+
+
+{`store.maintenance().server().send(
+ new DeleteDatabasesOperation("MyNewDatabase", true, null, null));
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_get-build-number-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_get-build-number-csharp.mdx
new file mode 100644
index 0000000000..db0757d35c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_get-build-number-csharp.mdx
@@ -0,0 +1,62 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To get the server build number, use **GetBuildNumberOperation** from `Maintenance.Server`.
+
+## Syntax
+
+
+
+{`public GetBuildNumberOperation()
+`}
+
+
+
+### Return Value
+
+The result of executing GetBuildNumberOperation is a **BuildNumber** object:
+
+
+
+{`public class BuildNumber
+\{
+ public string ProductVersion \{ get; set; \}
+
+ public int BuildVersion \{ get; set; \}
+
+ public string CommitHash \{ get; set; \}
+
+ public string FullVersion \{ get; set; \}
+\}
+`}
+
+
+
+| Property | Description |
+|--------------------|---------------------------------------|
+| **ProductVersion** | The current product version, e.g. "4.0" |
+| **BuildVersion** | The current build version, e.g. 40 |
+| **CommitHash** | The git commit SHA, e.g. "a377982" |
+| **FullVersion** | The full version in semantic versioning format, e.g. "4.0.0" |
+
+## Example
+
+
+
+
+{`var getBuildNumberResult = documentStore.Maintenance.Server.Send(new GetBuildNumberOperation());
+Console.WriteLine(getBuildNumberResult.BuildVersion);
+`}
+
+
+
+
+{`var buildNumber = await documentStore.Maintenance.Server.SendAsync(new GetBuildNumberOperation());
+Console.WriteLine(buildNumber.BuildVersion);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_get-database-names-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_get-database-names-csharp.mdx
new file mode 100644
index 0000000000..0a30c21809
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_get-database-names-csharp.mdx
@@ -0,0 +1,35 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To retrieve the names of the databases on a server, use the `GetDatabaseNamesOperation`.
+
+## Syntax
+
+
+
+{`public GetDatabaseNamesOperation(int start, int pageSize)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **start** | int | Number of records to skip |
+| **pageSize** | int | Maximum number of records to retrieve |
+
+| Return Value | |
+| ------------- | ----- |
+| string[] | Names of databases on a server |
+
+## Example
+
+
+
+{`var operation = new GetDatabaseNamesOperation(0, 25);
+string[] databaseNames = store.Maintenance.Server.Send(operation);
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_get-database-names-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_get-database-names-java.mdx
new file mode 100644
index 0000000000..c76ea5f0b0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_get-database-names-java.mdx
@@ -0,0 +1,35 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To retrieve the names of the databases on a server, use the `GetDatabaseNamesOperation`.
+
+## Syntax
+
+
+
+{`public GetDatabaseNamesOperation(int start, int pageSize)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **start** | int | Number of records to skip |
+| **pageSize** | int | Maximum number of records to retrieve |
+
+| Return Value | |
+| ------------- | ----- |
+| String[] | Names of databases on a server |
+
+## Example
+
+
+
+{`GetDatabaseNamesOperation operation = new GetDatabaseNamesOperation(0, 25);
+String[] databaseNames = store.maintenance().server().send(operation);
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_modify-conflict-solver-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_modify-conflict-solver-csharp.mdx
new file mode 100644
index 0000000000..a4b0759fcc
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_modify-conflict-solver-csharp.mdx
@@ -0,0 +1,81 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+The conflict solver allows you to set a conflict resolution script for each collection or resolve conflicts using the latest version.
+
+To modify the solver configuration, use **ModifyConflictSolverOperation**.
+
+## Syntax
+
+
+
+{`public ModifyConflictSolverOperation(
+    string database,
+    Dictionary<string, ScriptResolver> collectionByScript = null,
+    bool resolveToLatest = false)
+`}
+
+
+
+
+
+{`public class ScriptResolver
+\{
+ public string Script \{ get; set; \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **database** | string | Name of a database |
+| **collectionByScript** | Dictionary<string,ScriptResolver> | Per collection conflict resolution script |
+| **resolveToLatest** | bool | Indicates if a conflict should be resolved using the latest version |
+
+
+| Return Value | |
+| ------------- | ----- |
+| **Key** | Name of database |
+| **RaftCommandIndex** | RAFT command index |
+| **Solver** | Saved conflict solver configuration |
+
+## Example I
+
+
+
+{`// resolve conflict to latest version
+ModifyConflictSolverOperation operation =
+ new ModifyConflictSolverOperation("Northwind", null, resolveToLatest: true);
+store.Maintenance.Server.Send(operation);
+`}
+
+
+
+
+## Example II
+
+
+
+{`// resolve conflict by finding max value
+string script = @"
+var maxRecord = 0;
+for (var i = 0; i < docs.length; i++) \{
+    maxRecord = Math.max(docs[i].MaxRecord, maxRecord);
+\}
+docs[0].MaxRecord = maxRecord;
+
+return docs[0];";
+
+ModifyConflictSolverOperation operation =
+    new ModifyConflictSolverOperation("Northwind", new Dictionary<string, ScriptResolver>
+    \{
+        \{ "Orders", new ScriptResolver \{ Script = script \} \}
+    \}, resolveToLatest: false);
+store.Maintenance.Server.Send(operation);
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_promote-database-node-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_promote-database-node-csharp.mdx
new file mode 100644
index 0000000000..fe8d82fc4c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_promote-database-node-csharp.mdx
@@ -0,0 +1,33 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+This operation is used to promote a database node. After promotion, the node is considered a `Member` of the database group.
+
+## Syntax
+
+
+
+{`public PromoteDatabaseNodeOperation(string databaseName, string node)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **databaseName** | string | Name of a database |
+| **node** | string | Node tag to promote into database group `Member` |
+
+## Example
+
+
+
+{`PromoteDatabaseNodeOperation promoteOperation = new PromoteDatabaseNodeOperation("Northwind", "C");
+store.Maintenance.Server.Send(promoteOperation);
+`}
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_reorder-database-members-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_reorder-database-members-csharp.mdx
new file mode 100644
index 0000000000..94189cfb4d
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_reorder-database-members-csharp.mdx
@@ -0,0 +1,53 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+**ReorderDatabaseMembersOperation** allows you to change the order of nodes in the [Database Group Topology](../../../studio/database/settings/manage-database-group.mdx).
+
+## Syntax
+
+
+
+{`public ReorderDatabaseMembersOperation(string database, List<string> order)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **database** | string | Name of a database to operate on |
+| **order** | List\<string> | List of the node tags of all existing nodes in the database group, in the exact order you wish to set. Throws `ArgumentException` if the reordered list doesn't correspond to the existing nodes of the database group |
+
+
+## Example I
+
+
+
+{`// Assume that the current order of database group nodes is : ["A", "B", "C"]
+
+// Change the order of database group nodes to : ["C", "A", "B"]
+
+store.Maintenance.Server.Send(new ReorderDatabaseMembersOperation("Northwind",
+    new List<string>
+ \{
+ "C", "A", "B"
+ \}));
+`}
+
+
+
+## Example II
+
+
+
+{`// Get the current DatabaseTopology from database record
+var topology = store.Maintenance.Server.Send(new GetDatabaseRecordOperation("Northwind")).Topology;
+
+// Reverse the order of database group nodes
+topology.Members.Reverse();
+store.Maintenance.Server.Send(new ReorderDatabaseMembersOperation("Northwind", topology.Members));
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-csharp.mdx
new file mode 100644
index 0000000000..2ce99aa7b8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-csharp.mdx
@@ -0,0 +1,60 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+* To restore a database from its backup, use `RestoreBackupOperation`.
+* A backup can also be restored using [Studio](../../../studio/database/create-new-database/from-backup.mdx).
+
+## Syntax
+
+
+
+{`public RestoreBackupOperation(RestoreBackupConfiguration restoreConfiguration)
+`}
+
+
+
+
+
+{`public class RestoreBackupConfiguration
+\{
+ public string DatabaseName \{ get; set; \}
+
+ public string BackupLocation \{ get; set; \}
+
+ public string LastFileNameToRestore \{ get; set; \}
+
+ public string DataDirectory \{ get; set; \}
+
+ public string EncryptionKey \{ get; set; \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **DatabaseName** | string | Database name to create during the restore operation |
+| **BackupLocation** | string | Directory containing backup files |
+| **LastFileNameToRestore** | string | Used for partial restore |
+| **DataDirectory** | string | Optional: Database data directory |
+| **EncryptionKey** | string | Encryption key used for restore |
+
+## Example
+
+
+
+{`RestoreBackupConfiguration config = new RestoreBackupConfiguration()
+\{
+ BackupLocation = @"C:\\backups\\Northwind",
+ DatabaseName = "Northwind"
+\};
+RestoreBackupOperation restoreOperation = new RestoreBackupOperation(config);
+store.Maintenance.Server.Send(restoreOperation)
+ .WaitForCompletion();
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-java.mdx
new file mode 100644
index 0000000000..f6fa3ca910
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-java.mdx
@@ -0,0 +1,106 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+* To restore a database from its backup, use `RestoreBackupOperation`.
+* A backup can also be restored using [Studio](../../../studio/database/create-new-database/from-backup.mdx).
+
+## Syntax
+
+
+
+{`public RestoreBackupOperation(RestoreBackupConfigurationBase restoreConfiguration);
+
+public RestoreBackupOperation(RestoreBackupConfigurationBase restoreConfiguration, String nodeTag);
+`}
+
+
+
+
+
+{`public abstract class RestoreBackupConfigurationBase \{
+
+    private String databaseName;
+    private String lastFileNameToRestore;
+    private String dataDirectory;
+    private String encryptionKey;
+    private boolean disableOngoingTasks;
+    private boolean skipIndexes;
+    private BackupEncryptionSettings backupEncryptionSettings;
+
+ public String getDatabaseName() \{
+ return databaseName;
+ \}
+
+ public void setDatabaseName(String databaseName) \{
+ this.databaseName = databaseName;
+ \}
+
+ public String getLastFileNameToRestore() \{
+ return lastFileNameToRestore;
+ \}
+
+ public void setLastFileNameToRestore(String lastFileNameToRestore) \{
+ this.lastFileNameToRestore = lastFileNameToRestore;
+ \}
+
+ public String getDataDirectory() \{
+ return dataDirectory;
+ \}
+
+ public void setDataDirectory(String dataDirectory) \{
+ this.dataDirectory = dataDirectory;
+ \}
+
+ public String getEncryptionKey() \{
+ return encryptionKey;
+ \}
+
+ public void setEncryptionKey(String encryptionKey) \{
+ this.encryptionKey = encryptionKey;
+ \}
+
+ public boolean isDisableOngoingTasks() \{
+ return disableOngoingTasks;
+ \}
+
+ public void setDisableOngoingTasks(boolean disableOngoingTasks) \{
+ this.disableOngoingTasks = disableOngoingTasks;
+ \}
+
+ public boolean isSkipIndexes() \{
+ return skipIndexes;
+ \}
+
+ public void setSkipIndexes(boolean skipIndexes) \{
+ this.skipIndexes = skipIndexes;
+ \}
+
+ public BackupEncryptionSettings getBackupEncryptionSettings() \{
+ return backupEncryptionSettings;
+ \}
+
+ public void setBackupEncryptionSettings(BackupEncryptionSettings backupEncryptionSettings) \{
+ this.backupEncryptionSettings = backupEncryptionSettings;
+ \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **databaseName** | String | Database name to create during the restore operation |
+| **lastFileNameToRestore** | String | Used for partial restore |
+| **dataDirectory** | String | Optional: Database data directory |
+| **encryptionKey** | String | Encryption key used for restore |
+| **disableOngoingTasks** | boolean | Disable ongoing tasks |
+| **skipIndexes** | boolean | Skip importing the indexes |
+
+## Example
+
+
+
+{`RestoreBackupConfiguration config = new RestoreBackupConfiguration();
+config.setBackupLocation("C:\\\\backups\\\\Northwind");
+config.setDatabaseName("Northwind");
+RestoreBackupOperation restoreOperation = new RestoreBackupOperation(config);
+store.maintenance().server().send(restoreOperation);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-nodejs.mdx
new file mode 100644
index 0000000000..7c3520d6b6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_restore-backup-nodejs.mdx
@@ -0,0 +1,59 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+* To restore a database from its backup, use `RestoreBackupOperation`.
+* A backup can also be restored using [Studio](../../../studio/database/create-new-database/from-backup.mdx).
+
+## Syntax
+
+
+
+{`const restoreBackupOperation = new RestoreBackupOperation(restoreConfiguration, "nodeTag");
+`}
+
+
+
+
+
+{`export interface RestoreBackupConfigurationBase \{
+    databaseName: string;
+    lastFileNameToRestore: string;
+    dataDirectory: string;
+    encryptionKey: string;
+    disableOngoingTasks: boolean;
+    skipIndexes: boolean;
+    type: RestoreType;
+    backupEncryptionSettings: BackupEncryptionSettings;
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **databaseName** | string | Database name to create during the restore operation |
+| **lastFileNameToRestore** | string | Used for partial restore |
+| **dataDirectory** | string | Optional: Database data directory |
+| **encryptionKey** | string | Encryption key used for restore |
+| **disableOngoingTasks** | boolean | true/false to disable/enable Ongoing Tasks|
+| **skipIndexes** | boolean | true/false to disable/enable indexes import|
+| **type** | RestoreType | The type of the restore source |
+| **backupEncryptionSettings** | BackupEncryptionSettings | Backup encryption settings |
+
+## Example
+
+
+
+{`const restoreConfiguration = \{
+    databaseName: "Northwind",
+    skipIndexes: false
+\}
+const restoreBackupOperation = new RestoreBackupOperation(restoreConfiguration, "A");
+const restoreResult = await store.maintenance.server.send(restoreBackupOperation);
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-csharp.mdx
new file mode 100644
index 0000000000..936ec32a8b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-csharp.mdx
@@ -0,0 +1,139 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `ToggleDatabasesStateOperation` to enable/disable a single database or multiple databases.
+
+* The database will be enabled/disabled on all nodes in the [database-group](../../../studio/database/settings/manage-database-group.mdx).
+
+* In this page:
+
+ * [Enable/Disable database from the Client API](../../../client-api/operations/server-wide/toggle-databases-state.mdx#enable/disable-database-from-the-client-api)
+ * [Enable database](../../../client-api/operations/server-wide/toggle-databases-state.mdx#enable-database)
+ * [Disable database](../../../client-api/operations/server-wide/toggle-databases-state.mdx#disable-database)
+ * [Syntax](../../../client-api/operations/server-wide/toggle-databases-state.mdx#syntax)
+ * [Disable database via the file system](../../../client-api/operations/server-wide/toggle-databases-state.mdx#disable-database-via-the-file-system)
+
+
+## Enable/Disable database from the Client API
+
+#### Enable database:
+
+
+
+
+{`// Define the toggle state operation
+// specify the database name & pass 'false' to enable
+var enableDatabaseOp = new ToggleDatabasesStateOperation("Northwind", disable: false);
+
+// To enable multiple databases use:
+// var enableDatabaseOp =
+// new ToggleDatabasesStateOperation(new [] { "DB1", "DB2", ... }, disable: false);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+var toggleResult = documentStore.Maintenance.Server.Send(enableDatabaseOp);
+`}
+
+
+
+
+{`// Define the toggle state operation
+// specify the database name(s) & pass 'false' to enable
+var enableDatabaseOp = new ToggleDatabasesStateOperation(new [] { "Foo", "Bar" }, disable: false);
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+var toggleResult = await documentStore.Maintenance.Server.SendAsync(enableDatabaseOp);
+`}
+
+
+
+#### Disable database:
+
+
+
+
+{`// Define the toggle state operation
+// specify the database name(s) & pass 'true' to disable
+var disableDatabaseOp = new ToggleDatabasesStateOperation("Northwind", disable: true);
+
+// To disable multiple databases use:
+// var disableDatabaseOp =
+// new ToggleDatabasesStateOperation(new [] { "DB1", "DB2", ... }, disable: true);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+var toggleResult = documentStore.Maintenance.Server.Send(disableDatabaseOp);
+`}
+
+
+
+
+{`// Define the toggle state operation
+// specify the database name(s) & pass 'true' to disable
+var disableDatabaseOp = new ToggleDatabasesStateOperation("Northwind", disable: true);
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+var toggleResult = await documentStore.Maintenance.Server.SendAsync(disableDatabaseOp);
+`}
+
+
+
+#### Syntax:
+
+
+
+{`// Available overloads:
+public ToggleDatabasesStateOperation(string databaseName, bool disable)
+public ToggleDatabasesStateOperation(string[] databaseNames, bool disable)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|------------|-------------------------------------------------------------------------------------------|
+| **databaseName** | `string` | Name of database for which to toggle state |
+| **databaseNames** | `string[]` | List of database names for which to toggle state |
+| **disable** | `bool` | `true` - request to disable the database(s). `false` - request to enable the database(s) |
+
+
+
+{`// Executing the operation returns the following object:
+public class DisableDatabaseToggleResult
+\{
+ public bool Disabled; // Is database disabled
+ public string Name; // Name of the database
+ public bool Success; // Has request succeeded
+ public string Reason; // Reason for success or failure
+\}
+`}
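+
+For example, a minimal sketch of inspecting the returned result (only the fields shown above are assumed):
+
+{`var result = documentStore.Maintenance.Server.Send(
+    new ToggleDatabasesStateOperation("Northwind", disable: true));
+
+// The returned object reports whether the request succeeded and why
+if (result.Success == false)
+    Console.WriteLine($"Toggling '\{result.Name\}' failed: \{result.Reason\}");
+`}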
+
+
+
+
+
+## Disable database via the file system
+
+It may sometimes be useful to disable a database manually, through the file system.
+
+* To **manually disable** a database:
+
+ * Place a file named `disable.marker` in the [database directory](../../../server/storage/directory-structure.mdx).
+    * The `disable.marker` file can be empty, and can be created by any available method, e.g. using the File Explorer, a terminal, or code (see the sketch after this list).
+
+* Attempting to use a manually disabled database will generate the following exception:
+
+      Unable to open database: '{DatabaseName}',
+      it has been manually disabled via the file: '{disableMarkerPath}'.
+      To re-enable, remove the disable.marker and reload the database.
+
+* To **enable** a manually disabled database:
+
+ * First, remove the `disable.marker` file from the database directory.
+ * Then, [reload the database](../../../studio/database/settings/database-settings.mdx#how-to-reload-the-database).
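+
+For example, a minimal C# sketch (the `databasePath` variable is an assumption standing in for the database directory) of creating the marker file from code:
+
+{`using System.IO;
+
+// Create an empty 'disable.marker' file in the database directory
+// to manually disable the database
+string markerPath = Path.Combine(databasePath, "disable.marker");
+File.Create(markerPath).Dispose();
+`}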
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-nodejs.mdx
new file mode 100644
index 0000000000..31de0141ed
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-nodejs.mdx
@@ -0,0 +1,123 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `ToggleDatabasesStateOperation` to enable/disable a single database or multiple databases.
+
+* The database will be enabled/disabled on all nodes in the [database-group](../../../studio/database/settings/manage-database-group.mdx).
+
+* In this page:
+
+ * [Enable/Disable database from the Client API](../../../client-api/operations/server-wide/toggle-databases-state.mdx#enable/disable-database-from-the-client-api)
+ * [Enable database](../../../client-api/operations/server-wide/toggle-databases-state.mdx#enable-database)
+ * [Disable database](../../../client-api/operations/server-wide/toggle-databases-state.mdx#disable-database)
+ * [Syntax](../../../client-api/operations/server-wide/toggle-databases-state.mdx#syntax)
+ * [Disable database via the file system](../../../client-api/operations/server-wide/toggle-databases-state.mdx#disable-database-via-the-file-system)
+
+
+## Enable/Disable database from the Client API
+
+
+
+ **Enable database**:
+
+
+
+{`// Define the toggle state operation
+// specify the database name & pass 'false' to enable
+const enableDatabaseOp = new ToggleDatabasesStateOperation("Northwind", false);
+
+// To enable multiple databases use:
+// const enableDatabaseOp =
+// new ToggleDatabasesStateOperation(["DB1", "DB2", ...], false);
+
+// Execute the operation by passing it to maintenance.server.send
+const toggleResult = await documentStore.maintenance.server.send(enableDatabaseOp);
+`}
+
+
+
+
+
+
+ **Disable database**:
+
+
+
+{`// Define the toggle state operation
+// specify the database name(s) & pass 'true' to disable
+const disableDatabaseOp = new ToggleDatabasesStateOperation("Northwind", true);
+
+// To disable multiple databases use:
+// const disableDatabaseOp =
+// new ToggleDatabasesStateOperation(["DB1", "DB2", ...], true);
+
+// Execute the operation by passing it to maintenance.server.send
+const toggleResult = await documentStore.maintenance.server.send(disableDatabaseOp);
+`}
+
+
+
+
+
+
+ **Syntax**:
+
+
+
+{`// Available overloads:
+const enableDatabaseOp = new ToggleDatabasesStateOperation(databaseName, disable);
+const enableDatabaseOp = new ToggleDatabasesStateOperation(databaseNames, disable);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|----------|---------------------------------------------------------------------------------------------|
+| **databaseName** | `string` | Name of database for which to toggle state |
+| **databaseNames** | `string[]` | List of database names for which to toggle state |
+| **disable** | `boolean` | `true` - request to disable the database(s). `false` - request to enable the database(s) |
+
+
+
+{`// Executing the operation returns an object with the following properties:
+\{
+ disabled, // Is database disabled
+ name, // Name of the database
+ success, // Has request succeeded
+ reason // Reason for success or failure
+\}
+`}
+
+
+
+
+
+
+## Disable database via the file system
+
+It may sometimes be useful to disable a database manually, through the file system.
+
+* To **manually disable** a database:
+
+ * Place a file named `disable.marker` in the [database directory](../../../server/storage/directory-structure.mdx).
+    * The `disable.marker` file can be empty, and can be created by any available method, e.g. using the File Explorer, a terminal, or code.
+
+* Attempting to use a manually disabled database will generate the following exception:
+
+      Unable to open database: '{DatabaseName}',
+      it has been manually disabled via the file: '{disableMarkerPath}'.
+      To re-enable, remove the disable.marker and reload the database.
+
+* To **enable** a manually disabled database:
+
+ * First, remove the `disable.marker` file from the database directory.
+ * Then, [reload the database](../../../studio/database/settings/database-settings.mdx#how-to-reload-the-database).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-php.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-php.mdx
new file mode 100644
index 0000000000..ea80cc6f36
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-php.mdx
@@ -0,0 +1,113 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `ToggleDatabasesStateOperation` to enable/disable a single database or multiple databases.
+
+* The database will be enabled/disabled on all nodes in the [database-group](../../../studio/database/settings/manage-database-group.mdx).
+
+* In this page:
+
+ * [Enable/Disable database from the Client API](../../../client-api/operations/server-wide/toggle-databases-state.mdx#enable/disable-database-from-the-client-api)
+ * [Enable database](../../../client-api/operations/server-wide/toggle-databases-state.mdx#enable-database)
+ * [Disable database](../../../client-api/operations/server-wide/toggle-databases-state.mdx#disable-database)
+ * [Syntax](../../../client-api/operations/server-wide/toggle-databases-state.mdx#syntax)
+ * [Disable database via the file system](../../../client-api/operations/server-wide/toggle-databases-state.mdx#disable-database-via-the-file-system)
+
+
+## Enable/Disable database from the Client API
+
+#### Enable database:
+
+
+
+{`// Define the toggle state operation
+// specify the database name & pass 'false' to enable
+$enableDatabaseOp = new ToggleDatabasesStateOperation("Northwind", false);
+
+// To enable multiple databases use:
+// $enableDatabaseOp = new ToggleDatabasesStateOperation([ "DB1", "DB2", ... ], false);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+/** @var DisableDatabaseToggleResult $toggleResult */
+$toggleResult = $documentStore->maintenance()->server()->send($enableDatabaseOp);
+`}
+
+
+#### Disable database:
+
+
+
+{`// Define the toggle state operation
+// specify the database name(s) & pass 'true' to disable
+$disableDatabaseOp = new ToggleDatabasesStateOperation("Northwind", true);
+
+// To disable multiple databases use:
+// $disableDatabaseOp = new ToggleDatabasesStateOperation([ "DB1", "DB2", ... ], true);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+/** @var DisableDatabaseToggleResult $toggleResult */
+$toggleResult = $documentStore->maintenance()->server()->send($disableDatabaseOp);
+`}
+
+
+#### Syntax:
+
+
+
+{`public function __construct(string|StringArray|array $databaseName, bool $disable);
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------------|---------|---------------------------------------------------------------------------------------|
+| **$databaseName** | `string` / `StringArray` / `array` | Name or list of names of the database(s) whose state to toggle |
+| **$disable** | `bool` | `true` - request to disable the database(s). `false` - request to enable the database(s) |
+
+
+
+{`class DisableDatabaseToggleResult
+\{
+    public ?bool $disabled;   // Is database disabled
+    public ?string $name;     // Name of the database
+    public ?bool $success;    // Has request succeeded
+    public ?string $reason;   // Reason for success or failure
+\}
+`}
+
+
+
+
+
+## Disable database via the file system
+
+It may sometimes be useful to disable a database manually, through the file system.
+
+* To **manually disable** a database:
+
+ * Place a file named `disable.marker` in the [database directory](../../../server/storage/directory-structure.mdx).
+    * The `disable.marker` file can be empty, and can be created by any available method, e.g. using the File Explorer, a terminal, or code.
+
+* Attempting to use a manually disabled database will generate the following exception:
+
+      Unable to open database: '{DatabaseName}',
+      it has been manually disabled via the file: '{disableMarkerPath}'.
+      To re-enable, remove the disable.marker and reload the database.
+
+* To **enable** a manually disabled database:
+
+ * First, remove the `disable.marker` file from the database directory.
+ * Then, [reload the database](../../../studio/database/settings/database-settings.mdx#how-to-reload-the-database).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-python.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-python.mdx
new file mode 100644
index 0000000000..9dad0f4dda
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-databases-state-python.mdx
@@ -0,0 +1,112 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `ToggleDatabasesStateOperation` to enable/disable a single database or multiple databases.
+
+* The database will be enabled/disabled on all nodes in the [database-group](../../../studio/database/settings/manage-database-group.mdx).
+
+* In this page:
+
+ * [Enable/Disable database from the Client API](../../../client-api/operations/server-wide/toggle-databases-state.mdx#enable/disable-database-from-the-client-api)
+ * [Enable database](../../../client-api/operations/server-wide/toggle-databases-state.mdx#enable-database)
+ * [Disable database](../../../client-api/operations/server-wide/toggle-databases-state.mdx#disable-database)
+ * [Syntax](../../../client-api/operations/server-wide/toggle-databases-state.mdx#syntax)
+ * [Disable database via the file system](../../../client-api/operations/server-wide/toggle-databases-state.mdx#disable-database-via-the-file-system)
+
+
+## Enable/Disable database from the Client API
+
+#### Enable database:
+
+
+
+{`# Define the toggle state operation
+# specify the database name & pass 'False' to enable
+enable_database_op = ToggleDatabasesStateOperation("Northwind", disable=False)
+
+# To enable multiple databases use:
+# enable_database_op = ToggleDatabasesStateOperation.from_multiple_names(["DB1", "DB2", ...], disable=False)
+
+# Execute the operation by passing it to maintenance.server.send
+toggle_result = store.maintenance.server.send(enable_database_op)
+`}
+
+
+#### Disable database:
+
+
+
+{`# Define the toggle state operation
+# specify the database name(s) & pass 'True' to disable
+disable_database_op = ToggleDatabasesStateOperation("Northwind", disable=True)
+
+# To disable multiple databases use:
+# disable_database_op = ToggleDatabasesStateOperation.from_multiple_names(["DB1", "DB2", ...], disable=True)
+
+# Execute the operation by passing it to maintenance.server.send
+toggle_result = store.maintenance.server.send(disable_database_op)
+`}
+
+
+#### Syntax:
+
+
+
+{`class ToggleDatabasesStateOperation(ServerOperation[DisableDatabaseToggleResult]):
+ def __init__(self, database_name: str, disable: bool): ...
+ @classmethod
+ def from_multiple_names(cls, database_names: List[str], disable: bool): ...
+`}
+
+
+
+| Parameter | Type | Description |
+|--------------------|---------|-------------------------------------------------------------------------------------------|
+| **database_name** | `str` | Name of database for which to toggle state |
+| **database_names** | `str[]` | List of database names for which to toggle state |
+| **disable** | `bool` | `True` - request to disable the database(s). `False` - request to enable the database(s) |
+
+
+
+{`class DisableDatabaseToggleResult:
+ def __init__(
+ self, disabled: bool = None, name: str = None, success: bool = None, reason: str = None
+ ) -> None:
+ self.disabled = disabled # Is database disabled
+ self.name = name # Name of the database
+ self.success = success # Has request succeeded
+ self.reason = reason # Reason for success or failure
+`}
+
+
+
+
+
+## Disable database via the file system
+
+It may sometimes be useful to disable a database manually, through the file system.
+
+* To **manually disable** a database:
+
+ * Place a file named `disable.marker` in the [database directory](../../../server/storage/directory-structure.mdx).
+    * The `disable.marker` file can be empty, and can be created by any available method, e.g. using the File Explorer, a terminal, or code.
+
+* Attempting to use a manually disabled database will generate the following exception:
+
+      Unable to open database: '{DatabaseName}',
+      it has been manually disabled via the file: '{disableMarkerPath}'.
+      To re-enable, remove the disable.marker and reload the database.
+
+* To **enable** a manually disabled database:
+
+ * First, remove the `disable.marker` file from the database directory.
+ * Then, [reload the database](../../../studio/database/settings/database-settings.mdx#how-to-reload-the-database).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-dynamic-database-distribution-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-dynamic-database-distribution-csharp.mdx
new file mode 100644
index 0000000000..8817946dc5
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/_toggle-dynamic-database-distribution-csharp.mdx
@@ -0,0 +1,46 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Operations: Server: Toggle Dynamic Database Distribution
+
+
+* In [dynamic database distribution](../../../server/clustering/distribution/distributed-database.mdx#dynamic-database-distribution) mode,
+if a database node is down, another cluster node is added to the database group to compensate.
+
+* Use this operation to toggle dynamic distribution for a particular database group.
+
+* This can also be done [in the studio](../../../studio/database/settings/manage-database-group.mdx#database-group-topology---actions) under
+database group settings.
+
+
+
+
+
+
+
+
+{`public SetDatabaseDynamicDistributionOperation(string databaseName, bool allowDynamicDistribution)
+`}
+
+
+
+| Parameters | Type | Description |
+| - | - | - |
+| **databaseName** | string | Name of database group |
+| **allowDynamicDistribution** | bool | Set to `true` to activate dynamic distribution mode. |
+
+### Example
+
+
+
+{`SetDatabaseDynamicDistributionOperation operation =
+    new SetDatabaseDynamicDistributionOperation("Northwind", true);
+documentStore.Maintenance.Server.Send(operation);
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/add-database-node.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/add-database-node.mdx
new file mode 100644
index 0000000000..1334b92e80
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/add-database-node.mdx
@@ -0,0 +1,39 @@
+---
+title: "Adding a Database Node"
+hide_table_of_contents: true
+sidebar_label: Add Database Node
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import AddDatabaseNodeCsharp from './_add-database-node-csharp.mdx';
+import AddDatabaseNodePython from './_add-database-node-python.mdx';
+import AddDatabaseNodePhp from './_add-database-node-php.mdx';
+import AddDatabaseNodeNodejs from './_add-database-node-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_category_.json b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_category_.json
new file mode 100644
index 0000000000..66d2a31fb6
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 12,
+    "label": "Certificates"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-csharp.mdx
new file mode 100644
index 0000000000..b67a23d5a1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-csharp.mdx
@@ -0,0 +1,98 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* You can generate a client certificate using **CreateClientCertificateOperation**.
+
+* Learn the rationale behind properly defining client certificates in [The RavenDB Security Authorization Approach](../../../../server/security/authentication/certificate-management.mdx#the-ravendb-security-authorization-approach).
+
+
+
+## Syntax
+
+
+
+{`public CreateClientCertificateOperation(string name,
+    Dictionary<string, DatabaseAccess> permissions,
+    SecurityClearance clearance,
+    string password = null)
+`}
+
+
+
+
+
+{`// The role assigned to the certificate:
+public enum SecurityClearance
+\{
+ ClusterAdmin,
+ ClusterNode,
+ Operator,
+ ValidUser
+\}
+`}
+
+
+
+
+
+{`// The access level for a 'ValidUser' security clearance:
+public enum DatabaseAccess
+\{
+ Read,
+ ReadWrite,
+ Admin
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **name** | string | Name of a certificate |
+| **permissions** | Dictionary<string, DatabaseAccess> | Dictionary mapping databases to access level |
+| **clearance** | SecurityClearance | Access level |
+| **password** | string | Optional certificate password, default: no password |
+
+| Return Value | |
+| ------------- | ----- |
+| **RawData** | client certificate raw data |
+
+## Example I
+
+
+
+{`// With the security clearance set to Cluster Administrator or Operator,
+// the user of this certificate will have access to all databases
+CreateClientCertificateOperation operation =
+ new CreateClientCertificateOperation(
+ "admin", null, SecurityClearance.Operator);
+CertificateRawData certificateRawData =
+ store.Maintenance.Server.Send(operation);
+byte[] cert = certificateRawData.RawData;
+`}
+
+
+
+## Example II
+
+
+
+{`// When the security clearance is ValidUser, you must specify an access level for each database
+CreateClientCertificateOperation operation =
+    new CreateClientCertificateOperation(
+        "user1", new Dictionary<string, DatabaseAccess>
+        \{
+            \{ "Northwind", DatabaseAccess.Admin \}
+        \}, SecurityClearance.ValidUser, "myPassword");
+CertificateRawData certificateRawData =
+ store.Maintenance.Server.Send(operation);
+byte[] cert = certificateRawData.RawData;
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-java.mdx
new file mode 100644
index 0000000000..a2e5b247e3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-java.mdx
@@ -0,0 +1,95 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* You can generate a client certificate using **CreateClientCertificateOperation**.
+
+* Learn the rationale behind properly defining client certificates in [The RavenDB Security Authorization Approach](../../../../server/security/authentication/certificate-management.mdx#the-ravendb-security-authorization-approach).
+
+
+
+## Syntax
+
+
+
+{`public CreateClientCertificateOperation(String name,
+    Map<String, DatabaseAccess> permissions,
+    SecurityClearance clearance)
+
+public CreateClientCertificateOperation(String name,
+    Map<String, DatabaseAccess> permissions,
+    SecurityClearance clearance,
+    String password)
+`}
+
+
+
+
+
+{`public enum SecurityClearance \{
+ CLUSTER_ADMIN,
+ CLUSTER_NODE,
+ OPERATOR,
+ VALID_USER
+\}
+`}
+
+
+
+
+
+{`public enum DatabaseAccess \{
+ READ,
+ READ_WRITE,
+ ADMIN
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **name** | String | Name of a certificate |
+| **permissions** | Map<String, DatabaseAccess> | Map with database to access level mapping |
+| **clearance** | SecurityClearance | Access level |
+| **password** | String | Optional certificate password, default: no password |
+
+| Return Value | |
+| ------------- | ----- |
+| **RawData** | client certificate raw data |
+
+## Example I
+
+
+
+{`// With user role set to Cluster Administrator or Operator the user of this certificate
+// is going to have access to all databases
+
+CreateClientCertificateOperation operation = new CreateClientCertificateOperation("admin",
+ null, SecurityClearance.OPERATOR);
+CertificateRawData certificateRawData = store.maintenance().server().send(operation);
+byte[] certificatesZipped = certificateRawData.getRawData();
+`}
+
+
+
+## Example II
+
+
+
+{`// when security clearance is ValidUser, you need to specify per database permissions
+CreateClientCertificateOperation operation = new CreateClientCertificateOperation("user1",
+ Collections.singletonMap("Northwind", DatabaseAccess.ADMIN),
+ SecurityClearance.VALID_USER,
+ "myPassword");
+
+CertificateRawData certificateRawData = store.maintenance().server().send(operation);
+byte[] certificateZipped = certificateRawData.getRawData();
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-nodejs.mdx
new file mode 100644
index 0000000000..64b26c64ed
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_create-client-certificate-nodejs.mdx
@@ -0,0 +1,79 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* You can generate a client certificate using **CreateClientCertificateOperation**.
+
+* Learn the rationale behind properly defining client certificates in [The RavenDB Security Authorization Approach](../../../../server/security/authentication/certificate-management.mdx#the-ravendb-security-authorization-approach).
+
+
+
+## Usage
+
+
+
+{`const cert1 = await store.maintenance.server.send(
+ new CreateClientCertificateOperation([name], [permissions], [clearance], [password]));
+`}
+
+
+
+`SecurityClearance` options:
+
+* `UnauthenticatedClients`
+* `ClusterAdmin`
+* `ClusterNode`
+* `Operator`
+* `ValidUser`
+
+`DatabaseAccess` options:
+
+* `ReadWrite`
+* `Admin`
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **name** | string | Name of a certificate |
+| **permissions** | Record<string, DatabaseAccess> | Record mapping databases to access level |
+| **clearance** | SecurityClearance | Access level |
+| **password** | string | Optional certificate password, default: no password |
+
+| Return Value | |
+| ------------- | ----- |
+| **RawData** | client certificate raw data |
+
+## Example I
+
+
+
+{`// With user role set to Cluster Administrator or Operator the user of this certificate
+// is going to have access to all databases
+const clientCertificateOperation = await store.maintenance.server.send(
+ new CreateClientCertificateOperation("admin", \{\}, "Operator"));
+const certificateRawData = clientCertificateOperation.rawData;
+`}
+
+
+
+## Example II
+
+
+
+{`// when security clearance is ValidUser, you need to specify per-database permissions
+const permissions = \{
+    [store.database]: "ReadWrite"
+\};
+
+const clientCertificateOperation = await store.maintenance.server.send(
+    new CreateClientCertificateOperation("user1", permissions, "ValidUser", "myPassword"));
+const certificateRawData = clientCertificateOperation.rawData;
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-csharp.mdx
new file mode 100644
index 0000000000..7c37121dec
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-csharp.mdx
@@ -0,0 +1,31 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+You can delete a client certificate using the **DeleteCertificateOperation**.
+
+## Syntax
+
+
+
+{`public DeleteCertificateOperation(string thumbprint)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **thumbprint** | string | The certificate thumbprint |
+
+## Example I
+
+
+
+{`string thumbprint = "a909502dd82ae41433e6f83886b00d4277a32a7b";
+store.Maintenance.Server.Send(new DeleteCertificateOperation(thumbprint));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-java.mdx
new file mode 100644
index 0000000000..3bfcfd4593
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-java.mdx
@@ -0,0 +1,31 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+You can delete a client certificate using **DeleteCertificateOperation**.
+
+## Syntax
+
+
+
+{`public DeleteCertificateOperation(String thumbprint);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **thumbprint** | String | The certificate thumbprint |
+
+## Example I
+
+
+
+{`String thumbprint = "a909502dd82ae41433e6f83886b00d4277a32a7b";
+store.maintenance().server().send(new DeleteCertificateOperation(thumbprint));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-nodejs.mdx
new file mode 100644
index 0000000000..ace71e94f0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_delete-certificate-nodejs.mdx
@@ -0,0 +1,31 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+You can delete a client certificate using the **DeleteCertificateOperation**.
+
+## Usage
+
+
+
+{`await store.maintenance.server.send(new DeleteCertificateOperation([thumbprint]));
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **thumbprint** | string | The certificate thumbprint |
+
+## Example I
+
+
+
+{`const thumbprint = "a909502dd82ae41433e6f83886b00d4277a32a7b";
+await store.maintenance.server.send(new DeleteCertificateOperation(thumbprint));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificate-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificate-csharp.mdx
new file mode 100644
index 0000000000..887b498f1e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificate-csharp.mdx
@@ -0,0 +1,35 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To get a client certificate by its thumbprint, use **GetCertificateOperation**.
+
+## Syntax
+
+
+
+{`public GetCertificateOperation(string thumbprint)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **thumbprint** | string | Certificate thumbprint |
+
+| Return Value | |
+| ------------- | ----- |
+| `CertificateDefinition` | Certificate definition |
+
+## Example
+
+
+
+{`string thumbprint = "a909502dd82ae41433e6f83886b00d4277a32a7b";
+CertificateDefinition definition =
+ store.Maintenance.Server.Send(new GetCertificateOperation(thumbprint));
+`}
+
+
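+
+The returned `CertificateDefinition` can then be inspected. A minimal sketch, assuming the
+definition exposes `Name` and `SecurityClearance` properties (and that `null` is returned
+when no certificate matches the thumbprint):
+
+
+
+{`CertificateDefinition definition =
+    store.Maintenance.Server.Send(new GetCertificateOperation(thumbprint));
+
+// Property names below are assumptions for illustration
+if (definition != null)
+\{
+    Console.WriteLine(definition.Name);
+    Console.WriteLine(definition.SecurityClearance);
+\}
+`}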
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificate-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificate-java.mdx
new file mode 100644
index 0000000000..d5b7bc8440
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificate-java.mdx
@@ -0,0 +1,36 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To get a client certificate by its thumbprint, use **GetCertificateOperation**.
+
+## Syntax
+
+
+
+{`public GetCertificateOperation(String thumbprint)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **thumbprint** | String | Certificate thumbprint |
+
+| Return Value | |
+| ------------- | ----- |
+| `CertificateDefinition` | Certificate definition |
+
+## Example
+
+
+
+{`String thumbprint = "a909502dd82ae41433e6f83886b00d4277a32a7b";
+CertificateDefinition definition = store.maintenance()
+ .server()
+ .send(new GetCertificateOperation(thumbprint));
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificates-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificates-csharp.mdx
new file mode 100644
index 0000000000..de8b78ec93
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificates-csharp.mdx
@@ -0,0 +1,35 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To get the registered client certificates, use **GetCertificatesOperation**.
+
+## Syntax
+
+
+
+{`public GetCertificatesOperation(int start, int pageSize)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **start** | int | Number of records that should be skipped |
+| **pageSize** | int | Maximum number of records that will be downloaded |
+
+| Return Value | |
+| ------------- | ----- |
+| `CertificateDefinition[]` | Array of certificate definitions |
+
+## Example
+
+
+
+{`CertificateDefinition[] definitions =
+ store.Maintenance.Server.Send(new GetCertificatesOperation(0, 20));
+`}
+
+
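+
+Because results are paged, retrieving every certificate means advancing `start` until a page
+comes back smaller than `pageSize`. A minimal sketch of that loop:
+
+
+
+{`// Page through all registered client certificates
+// (List<T> requires 'using System.Collections.Generic;')
+var allDefinitions = new List<CertificateDefinition>();
+int start = 0;
+const int pageSize = 20;
+
+while (true)
+\{
+    CertificateDefinition[] page =
+        store.Maintenance.Server.Send(new GetCertificatesOperation(start, pageSize));
+
+    allDefinitions.AddRange(page);
+
+    // A page smaller than pageSize means there are no more results
+    if (page.Length < pageSize)
+        break;
+
+    start += pageSize;
+\}
+`}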
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificates-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificates-java.mdx
new file mode 100644
index 0000000000..f7796a7c27
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_get-certificates-java.mdx
@@ -0,0 +1,36 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To get the registered client certificates, use **GetCertificatesOperation**.
+
+## Syntax
+
+
+
+{`public GetCertificatesOperation(int start, int pageSize)
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **start** | int | Number of records that should be skipped |
+| **pageSize** | int | Maximum number of records that will be downloaded |
+
+| Return Value | |
+| ------------- | ----- |
+| `CertificateDefinition[]` | Array of certificate definitions |
+
+## Example
+
+
+
+{`CertificateDefinition[] definitions = store.maintenance()
+ .server()
+ .send(new GetCertificatesOperation(0, 20));
+`}
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-csharp.mdx
new file mode 100644
index 0000000000..3797ee28a4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-csharp.mdx
@@ -0,0 +1,105 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `PutClientCertificateOperation` to register an existing client certificate.
+
+* To register an existing client certificate from the Studio,
+ see [Upload an existing client certificate](../../../../studio/server/certificates/server-management-certificates-view.mdx#upload-an-existing-client-certificate).
+
+* In this article:
+ * [Put client certificate example](../../../../client-api/operations/server-wide/certificates/put-client-certificate.mdx#put-client-certificate-example)
+ * [Syntax](../../../../client-api/operations/server-wide/certificates/put-client-certificate.mdx#syntax)
+
+
+## Put client certificate example
+
+
+
+
+{`X509Certificate2 certificate = new X509Certificate2("c:\\\\path_to_pfx_file");
+
+// Define the put client certificate operation
+var putClientCertificateOp = new PutClientCertificateOperation(
+ "certificateName",
+ certificate,
+    new Dictionary<string, DatabaseAccess>(),
+ SecurityClearance.ClusterAdmin);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+store.Maintenance.Server.Send(putClientCertificateOp);
+`}
+
+
+
+
+{`X509Certificate2 certificate = new X509Certificate2("c:\\\\path_to_pfx_file");
+
+// Define the put client certificate operation
+var putClientCertificateOp = new PutClientCertificateOperation(
+ "certificateName",
+ certificate,
+    new Dictionary<string, DatabaseAccess>(),
+ SecurityClearance.ClusterAdmin);
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+await store.Maintenance.Server.SendAsync(putClientCertificateOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public PutClientCertificateOperation(
+ string name,
+ X509Certificate2 certificate,
+    Dictionary<string, DatabaseAccess> permissions,
+ SecurityClearance clearance)
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|--------------------------------------|-----------------------------------------------------------------------------------------------------|
+| **name** | `string` | A name for the certificate. |
+| **certificate** | `X509Certificate2` | The certificate to register. |
+| **permissions** | `Dictionary<string, DatabaseAccess>` | A dictionary mapping database name to access level. Relevant only when clearance is `ValidUser`. |
+| **clearance** | `SecurityClearance` | Access level (role) assigned to the certificate. |
+
+
+
+{`// The role assigned to the certificate:
+public enum SecurityClearance
+\{
+ ClusterAdmin,
+ ClusterNode,
+ Operator,
+ ValidUser
+\}
+`}
+
+
+
+
+{`// The access level for a 'ValidUser' security clearance:
+public enum DatabaseAccess
+\{
+ Read,
+ ReadWrite,
+ Admin
+\}
+`}
+
+
+
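+
+For a `ValidUser` certificate, the `permissions` dictionary determines which databases the
+certificate can access and at what level. A minimal sketch, where the database name
+"Northwind" is hypothetical:
+
+
+
+{`X509Certificate2 userCertificate = new X509Certificate2("c:\\\\path_to_pfx_file");
+
+// Register a certificate with ValidUser clearance and
+// ReadWrite access to a single (hypothetical) database
+var putUserCertificateOp = new PutClientCertificateOperation(
+    "userCertificateName",
+    userCertificate,
+    new Dictionary<string, DatabaseAccess>
+    \{
+        ["Northwind"] = DatabaseAccess.ReadWrite
+    \},
+    SecurityClearance.ValidUser);
+
+store.Maintenance.Server.Send(putUserCertificateOp);
+`}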
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-java.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-java.mdx
new file mode 100644
index 0000000000..c4816e7e37
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-java.mdx
@@ -0,0 +1,80 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `PutClientCertificateOperation` to register an existing client certificate.
+
+* To register an existing client certificate from the Studio,
+ see [Upload an existing client certificate](../../../../studio/server/certificates/server-management-certificates-view.mdx#upload-an-existing-client-certificate).
+
+* In this article:
+ * [Put client certificate example](../../../../client-api/operations/server-wide/certificates/put-client-certificate.mdx#put-client-certificate-example)
+ * [Syntax](../../../../client-api/operations/server-wide/certificates/put-client-certificate.mdx#syntax)
+
+
+## Put client certificate example
+
+
+
+{`byte[] rawCert = Files.readAllBytes(Paths.get(""));
+String certificateAsBase64 = Base64.getEncoder().encodeToString(rawCert);
+
+store.maintenance().server().send(
+ new PutClientCertificateOperation(
+ "certificateName",
+ certificateAsBase64,
+ new HashMap<>(),
+ SecurityClearance.CLUSTER_ADMIN));
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`public PutClientCertificateOperation(String name,
+ String certificate,
+    Map<String, DatabaseAccess> permissions,
+ SecurityClearance clearance)
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|-------------------------------|------------------------------------------------------------------------------------------------------|
+| **name** | `String` | A name for the certificate. |
+| **certificate** | `String` | The certificate to register. |
+| **permissions** | `Map<String, DatabaseAccess>` | A dictionary mapping database name to access level. Relevant only when clearance is `VALID_USER`. |
+| **clearance** | `SecurityClearance` | Access level (role) assigned to the certificate. |
+
+
+
+{`public enum SecurityClearance \{
+ CLUSTER_ADMIN,
+ CLUSTER_NODE,
+ OPERATOR,
+ VALID_USER
+\}
+`}
+
+
+
+
+{`public enum DatabaseAccess \{
+ READ,
+ READ_WRITE,
+ ADMIN
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-nodejs.mdx
new file mode 100644
index 0000000000..072036c0ff
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/_put-client-certificate-nodejs.mdx
@@ -0,0 +1,68 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Use `PutClientCertificateOperation` to register an existing client certificate.
+
+* To register an existing client certificate from the Studio,
+ see [Upload an existing client certificate](../../../../studio/server/certificates/server-management-certificates-view.mdx#upload-an-existing-client-certificate).
+
+* In this article:
+ * [Put client certificate example](../../../../client-api/operations/server-wide/certificates/put-client-certificate.mdx#put-client-certificate-example)
+ * [Syntax](../../../../client-api/operations/server-wide/certificates/put-client-certificate.mdx#syntax)
+
+
+## Put client certificate example
+
+
+
+{`const rawCert = fs.readFileSync("");
+const certificateAsBase64 = rawCert.toString("base64");
+
+const putClientCertificateOp = new PutClientCertificateOperation(
+ "certificateName",
+ certificateAsBase64,
+ \{\},
+ "ClusterAdmin");
+
+await store.maintenance.server.send(putClientCertificateOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const putOperation =
+ new PutClientCertificateOperation(name, certificate, permissions, clearance);
+`}
+
+
+
+| Parameter | Type | Description |
+|-----------------|----------------------------------|-----------------------------------------------------------------------------------------------------|
+| **name** | `string` | A name for the certificate. |
+| **certificate** | `string` | The certificate to register. |
+| **permissions** | `Record<string, DatabaseAccess>` | A dictionary mapping database name to access level. Relevant only when clearance is `ValidUser`. |
+| **clearance** | `SecurityClearance` | Access level (role) assigned to the certificate. |
+
+* `SecurityClearance` options:
+ * `ClusterAdmin`
+ * `ClusterNode`
+ * `Operator`
+ * `ValidUser`
+
+* `DatabaseAccess` options:
+ * `Read`
+ * `ReadWrite`
+ * `Admin`
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/create-client-certificate.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/create-client-certificate.mdx
new file mode 100644
index 0000000000..d19dab74ce
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/create-client-certificate.mdx
@@ -0,0 +1,42 @@
+---
+title: "Operations: Server: How to Generate a Client Certificate"
+hide_table_of_contents: true
+sidebar_label: Create Client Certificate
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import CreateClientCertificateCsharp from './_create-client-certificate-csharp.mdx';
+import CreateClientCertificateJava from './_create-client-certificate-java.mdx';
+import CreateClientCertificateNodejs from './_create-client-certificate-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/delete-certificate.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/delete-certificate.mdx
new file mode 100644
index 0000000000..c1cccb2259
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/delete-certificate.mdx
@@ -0,0 +1,35 @@
+---
+title: "Operations: Server: How to Delete a Client Certificate"
+hide_table_of_contents: true
+sidebar_label: Delete Certificate
+sidebar_position: 3
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeleteCertificateCsharp from './_delete-certificate-csharp.mdx';
+import DeleteCertificateJava from './_delete-certificate-java.mdx';
+import DeleteCertificateNodejs from './_delete-certificate-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/get-certificate.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/get-certificate.mdx
new file mode 100644
index 0000000000..570bb84a57
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/get-certificate.mdx
@@ -0,0 +1,29 @@
+---
+title: "Operations: Server: How to Get a Certificate"
+hide_table_of_contents: true
+sidebar_label: Get Certificate
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetCertificateCsharp from './_get-certificate-csharp.mdx';
+import GetCertificateJava from './_get-certificate-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/get-certificates.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/get-certificates.mdx
new file mode 100644
index 0000000000..0754296533
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/get-certificates.mdx
@@ -0,0 +1,29 @@
+---
+title: "Operations: Server: How to Get Certificates"
+hide_table_of_contents: true
+sidebar_label: Get Certificates
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetCertificatesCsharp from './_get-certificates-csharp.mdx';
+import GetCertificatesJava from './_get-certificates-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/put-client-certificate.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/put-client-certificate.mdx
new file mode 100644
index 0000000000..22d405b2ff
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/certificates/put-client-certificate.mdx
@@ -0,0 +1,36 @@
+---
+title: "Put Client Certificate Operation"
+hide_table_of_contents: true
+sidebar_label: Put Client Certificate
+sidebar_position: 4
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PutClientCertificateCsharp from './_put-client-certificate-csharp.mdx';
+import PutClientCertificateJava from './_put-client-certificate-java.mdx';
+import PutClientCertificateNodejs from './_put-client-certificate-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/compact-database.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/compact-database.mdx
new file mode 100644
index 0000000000..62646c3d4c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/compact-database.mdx
@@ -0,0 +1,51 @@
+---
+title: "Compact Database Operation"
+hide_table_of_contents: true
+sidebar_label: Compact Database
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import CompactDatabaseCsharp from './_compact-database-csharp.mdx';
+import CompactDatabasePython from './_compact-database-python.mdx';
+import CompactDatabasePhp from './_compact-database-php.mdx';
+import CompactDatabaseNodejs from './_compact-database-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_category_.json b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_category_.json
new file mode 100644
index 0000000000..a20298e082
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 13,
+  "label": "Configuration"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_get-serverwide-client-configuration-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_get-serverwide-client-configuration-csharp.mdx
new file mode 100644
index 0000000000..28628dfc5e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_get-serverwide-client-configuration-csharp.mdx
@@ -0,0 +1,60 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* It is recommended to first refer to the [put server-wide client-configuration](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx) article for general knowledge.
+
+* Use `GetServerWideClientConfigurationOperation` to get the current server-wide Client-Configuration set on the server.
+
+* In this page:
+  * [Get client-configuration](../../../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx#get-client-configuration)
+  * [Syntax](../../../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx#syntax)
+
+
+## Get client-configuration
+
+
+
+
+{`// Define the get client-configuration operation
+var getServerWideClientConfigOp = new GetServerWideClientConfigurationOperation();
+
+// Execute the operation by passing it to Maintenance.Server.Send
+ClientConfiguration config = store.Maintenance.Server.Send(getServerWideClientConfigOp);
+`}
+
+
+
+
+{`// Define the get client-configuration operation
+var getServerWideClientConfigOp = new GetServerWideClientConfigurationOperation();
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+ClientConfiguration config =
+ await store.Maintenance.Server.SendAsync(getServerWideClientConfigOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public GetServerWideClientConfigurationOperation()
+`}
+
+
+
+| Return Value | |
+|-----------------------|------------------------------------------------|
+| `ClientConfiguration` | Configuration which will be used by the Client |
+
+
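+
+The returned object exposes the same options that are set with the put operation. A minimal
+sketch of reading it, assuming the operation returns `null` when no server-wide
+configuration has been set:
+
+
+
+{`ClientConfiguration config =
+    store.Maintenance.Server.Send(new GetServerWideClientConfigurationOperation());
+
+// 'config' may be null if no server-wide client-configuration was set (assumption)
+if (config != null && config.Disabled == false)
+\{
+    Console.WriteLine(config.MaxNumberOfRequestsPerSession);
+    Console.WriteLine(config.ReadBalanceBehavior);
+\}
+`}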
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_get-serverwide-client-configuration-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_get-serverwide-client-configuration-nodejs.mdx
new file mode 100644
index 0000000000..5c535a88c2
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_get-serverwide-client-configuration-nodejs.mdx
@@ -0,0 +1,59 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* It is recommended to first refer to the [put server-wide client-configuration](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx) article for general knowledge.
+
+* Use `GetServerWideClientConfigurationOperation` to get the current server-wide Client-Configuration set on the server.
+
+* In this page:
+  * [Get client-configuration](../../../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx#get-client-configuration)
+  * [Syntax](../../../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx#syntax)
+
+
+## Get client-configuration
+
+
+
+{`// Define the get client-configuration operation
+const getServerWideClientConfigOp = new GetServerWideClientConfigurationOperation();
+
+// Execute the operation by passing it to maintenance.server.send
+const config = await documentStore.maintenance.server.send(getServerWideClientConfigOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const getServerWideClientConfigOp = new GetServerWideClientConfigurationOperation();
+`}
+
+
+
+
+
+{`// Executing the operation returns the client-configuration object:
+\{
+ identityPartsSeparator,
+ etag,
+ disabled,
+ maxNumberOfRequestsPerSession,
+ readBalanceBehavior,
+ loadBalanceBehavior,
+ loadBalancerContextSeed
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_put-serverwide-client-configuration-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_put-serverwide-client-configuration-csharp.mdx
new file mode 100644
index 0000000000..663e7a7231
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_put-serverwide-client-configuration-csharp.mdx
@@ -0,0 +1,104 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The server-wide Client-Configuration is a set of configuration options that are set __on the server__ and apply to any client when communicating with __any__ database in the cluster.
+ See the available configuration options in the article about [put client-configuration for database](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured).
+
+* To set the server-wide Client-Configuration on the server:
+
+ * Use `PutServerWideClientConfigurationOperation` from the client code.
+ See the example below.
+
+ * Or, set the server-wide Client-Configuration from the Studio [Client-Configuration view](../../../../studio/server/client-configuration.mdx).
+
+* A Client-Configuration that is set on the server __overrides__ the initial Client-Configuration that is set on the client when creating the Document Store.
+ A Client-Configuration that is set on the server for the [database level](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx)
+ will __override__ the server-wide Client-Configuration for that database.
+
+* Once the Client-Configuration is modified on the server, the running client will [receive the updated settings](../../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date)
+  the next time it makes a request to the database.
+
+* In this page:
+ * [Put client-configuration (server-wide)](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx#put-client-configuration-(server-wide))
+ * [Syntax](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx#syntax)
+
+
+## Put client-configuration (server-wide)
+
+
+
+
+{`// Define the client-configuration object
+ClientConfiguration clientConfiguration = new ClientConfiguration
+{
+ MaxNumberOfRequestsPerSession = 100,
+ ReadBalanceBehavior = ReadBalanceBehavior.FastestNode
+ // ...
+};
+
+// Define the put server-wide client-configuration operation, pass the configuration
+var putServerWideClientConfigOp = new PutServerWideClientConfigurationOperation(clientConfiguration);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+store.Maintenance.Server.Send(putServerWideClientConfigOp);
+`}
+
+
+
+
+{`// Define the client-configuration object
+ClientConfiguration clientConfiguration = new ClientConfiguration
+{
+ MaxNumberOfRequestsPerSession = 100,
+ ReadBalanceBehavior = ReadBalanceBehavior.FastestNode
+ // ...
+};
+
+// Define the put server-wide client-configuration operation, pass the configuration
+var putServerWideClientConfigOp = new PutServerWideClientConfigurationOperation(clientConfiguration);
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+await store.Maintenance.Server.SendAsync(putServerWideClientConfigOp);
+`}
+
+
+
+
+
+
+## Syntax
+
+
+
+{`public PutServerWideClientConfigurationOperation(ClientConfiguration configuration)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|-----------------------|-----------------------------------------------------------------------------------------|
+| __configuration__ | `ClientConfiguration` | Client configuration that will be set on the server (server-wide, for all databases) |
+
+
+
+{`public class ClientConfiguration
+\{
+ private char? _identityPartsSeparator;
+ public long Etag \{ get; set; \}
+ public bool Disabled \{ get; set; \}
+ public int? MaxNumberOfRequestsPerSession \{ get; set; \}
+ public ReadBalanceBehavior? ReadBalanceBehavior \{ get; set; \}
+ public LoadBalanceBehavior? LoadBalanceBehavior \{ get; set; \}
+ public int? LoadBalancerContextSeed \{ get; set; \}
+ public char? IdentityPartsSeparator; // can be any character except '|'
+\}
+`}
+
+
+
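+
+Setting `Disabled = true` keeps the stored configuration but instructs clients to ignore it
+and fall back to their local conventions. A minimal sketch:
+
+
+
+{`// Disable the server-wide client-configuration;
+// clients fall back to their own conventions
+var disabledConfiguration = new ClientConfiguration
+\{
+    Disabled = true
+\};
+
+store.Maintenance.Server.Send(
+    new PutServerWideClientConfigurationOperation(disabledConfiguration));
+`}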
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_put-serverwide-client-configuration-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_put-serverwide-client-configuration-nodejs.mdx
new file mode 100644
index 0000000000..196d5546c7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/_put-serverwide-client-configuration-nodejs.mdx
@@ -0,0 +1,83 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The server-wide Client-Configuration is a set of configuration options that are set __on the server__ and apply to any client when communicating with __any__ database in the cluster.
+ See the available configuration options in the article about [put client-configuration for database](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx#what-can-be-configured).
+
+* To set the server-wide Client-Configuration on the server:
+
+ * Use `PutServerWideClientConfigurationOperation` from the client code.
+ See the example below.
+
+ * Or, set the server-wide Client-Configuration from the Studio [Client-Configuration view](../../../../studio/server/client-configuration.mdx).
+
+* A Client-Configuration that is set on the server __overrides__ the initial Client-Configuration that is set on the client when creating the Document Store.
+ A Client-Configuration that is set on the server for the [database level](../../../../client-api/operations/maintenance/configuration/put-client-configuration.mdx)
+ will __override__ the server-wide Client-Configuration for that database.
+
+* Once the Client-Configuration is modified on the server, the running client will [receive the updated settings](../../../../client-api/configuration/load-balance/overview.mdx#keeping-the-client-topology-up-to-date)
+  the next time it makes a request to the database.
+
+* In this page:
+ * [Put client-configuration (server-wide)](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx#put-client-configuration-(server-wide))
+ * [Syntax](../../../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx#syntax)
+
+
+## Put client-configuration (server-wide)
+
+
+
+{`// Define the client-configuration object
+const clientConfiguration = \{
+ maxNumberOfRequestsPerSession: 200,
+ readBalanceBehavior: "FastestNode",
+ // ...
+\};
+
+// Define the put server-wide client-configuration operation, pass the configuration
+const putServerWideClientConfigOp =
+ new PutServerWideClientConfigurationOperation(clientConfiguration);
+
+// Execute the operation by passing it to maintenance.server.send
+await documentStore.maintenance.server.send(putServerWideClientConfigOp);
+`}
+
+
+
+
+
+## Syntax
+
+
+
+{`const putServerWideClientConfigOp = new PutServerWideClientConfigurationOperation(configuration);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|----------|-----------------------------------------------------------------------------------------|
+| __configuration__ | `object` | Client configuration that will be set on the server (server-wide, for all databases) |
+
+
+
+{`// The client-configuration object
+\{
+ identityPartsSeparator,
+ etag,
+ disabled,
+ maxNumberOfRequestsPerSession,
+ readBalanceBehavior,
+ loadBalanceBehavior,
+ loadBalancerContextSeed
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx
new file mode 100644
index 0000000000..b3cd75e182
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/get-serverwide-client-configuration.mdx
@@ -0,0 +1,32 @@
+---
+title: "Get Client Configuration Operation (Server-Wide)"
+hide_table_of_contents: true
+sidebar_label: Get Server Wide Client Configuration
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetServerwideClientConfigurationCsharp from './_get-serverwide-client-configuration-csharp.mdx';
+import GetServerwideClientConfigurationNodejs from './_get-serverwide-client-configuration-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx
new file mode 100644
index 0000000000..747e96ebd3
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/configuration/put-serverwide-client-configuration.mdx
@@ -0,0 +1,37 @@
+---
+title: "Put Client Configuration Operation (Server-Wide)"
+hide_table_of_contents: true
+sidebar_label: Put Server Wide Client Configuration
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PutServerwideClientConfigurationCsharp from './_put-serverwide-client-configuration-csharp.mdx';
+import PutServerwideClientConfigurationNodejs from './_put-serverwide-client-configuration-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/create-database.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/create-database.mdx
new file mode 100644
index 0000000000..cd7c943779
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/create-database.mdx
@@ -0,0 +1,31 @@
+---
+title: "Create Database Operation"
+hide_table_of_contents: true
+sidebar_label: Create Database
+sidebar_position: 2
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import CreateDatabaseCsharp from './_create-database-csharp.mdx';
+import CreateDatabaseJava from './_create-database-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/delete-database.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/delete-database.mdx
new file mode 100644
index 0000000000..3ca84bce26
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/delete-database.mdx
@@ -0,0 +1,29 @@
+---
+title: "Operations: Server: How to delete a database?"
+hide_table_of_contents: true
+sidebar_label: Delete Databases
+sidebar_position: 3
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeleteDatabaseCsharp from './_delete-database-csharp.mdx';
+import DeleteDatabaseJava from './_delete-database-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/get-build-number.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/get-build-number.mdx
new file mode 100644
index 0000000000..e5267bff6b
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/get-build-number.mdx
@@ -0,0 +1,24 @@
+---
+title: "Operations: Server: How to Get Server Build Number"
+hide_table_of_contents: true
+sidebar_label: Get Build Number
+sidebar_position: 4
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetBuildNumberCsharp from './_get-build-number-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/get-database-names.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/get-database-names.mdx
new file mode 100644
index 0000000000..cd5bf5e7aa
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/get-database-names.mdx
@@ -0,0 +1,29 @@
+---
+title: "Operations: Server: How to Get the Names of Databases on a Server"
+hide_table_of_contents: true
+sidebar_label: Get Database Names
+sidebar_position: 5
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetDatabaseNamesCsharp from './_get-database-names-csharp.mdx';
+import GetDatabaseNamesJava from './_get-database-names-java.mdx';
+
+export const supportedLanguages = ["csharp", "java"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_category_.json b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_category_.json
new file mode 100644
index 0000000000..df73a11b53
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 14,
+  "label": "Logs"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_get-logs-configuration-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_get-logs-configuration-csharp.mdx
new file mode 100644
index 0000000000..02e84b13b8
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_get-logs-configuration-csharp.mdx
@@ -0,0 +1,66 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To get the server logs configuration, use **GetLogsConfigurationOperation** from `Maintenance.Server`.
+
+## Syntax
+
+
+
+{`public GetLogsConfigurationOperation()
+`}
+
+
+
+### Return Value
+
+The result of executing GetLogsConfigurationOperation is a **GetLogsConfigurationResult** object:
+
+
+
+{`public class GetLogsConfigurationResult
+\{
+ public LogMode CurrentMode \{ get; set; \}
+
+ public LogMode Mode \{ get; set; \}
+
+ public string Path \{ get; set; \}
+
+ public bool UseUtcTime \{ get; set; \}
+\}
+`}
+
+
+
+| Property | Description |
+|-----------------|----------------------------------------------------------------------------------------------|
+| **CurrentMode** | Current mode that is active |
+| **Mode** | Mode written in the configuration file, which will be used after a server restart |
+| **Path** | Path to which logs will be written |
+| **UseUtcTime** | Indicates if logs will be written in UTC or in server local time |
+
+## Example
+
+
+
+
+{`GetLogsConfigurationResult logsConfiguration = store
+ .Maintenance
+ .Server
+ .Send(new GetLogsConfigurationOperation());
+`}
+
+
+
+
+{`GetLogsConfigurationResult logsConfiguration = await store
+ .Maintenance
+ .Server
+ .SendAsync(new GetLogsConfigurationOperation());
+`}
+
+
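+
+Since `CurrentMode` reflects the active runtime mode while `Mode` comes from the
+configuration file, comparing the two reveals whether the logging level was changed at
+runtime. A minimal sketch:
+
+
+
+{`GetLogsConfigurationResult logsConfiguration = store
+    .Maintenance
+    .Server
+    .Send(new GetLogsConfigurationOperation());
+
+// A difference means the mode was changed at runtime
+// and will revert to 'Mode' after a server restart
+if (logsConfiguration.CurrentMode != logsConfiguration.Mode)
+\{
+    Console.WriteLine("Runtime mode: " + logsConfiguration.CurrentMode);
+    Console.WriteLine("Mode after restart: " + logsConfiguration.Mode);
+\}
+`}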
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_set-logs-configuration-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_set-logs-configuration-csharp.mdx
new file mode 100644
index 0000000000..16a240eab4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/_set-logs-configuration-csharp.mdx
@@ -0,0 +1,61 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+To set the server logs configuration, use **SetLogsConfigurationOperation** from `Maintenance.Server`. The server logs configuration is not persisted and will revert to the original value after a server restart.
+
+## Syntax
+
+
+
+{`public SetLogsConfigurationOperation(Parameters parameters)
+`}
+
+
+
+
+
+{`public class Parameters
+\{
+ public LogMode Mode \{ get; set; \}
+\}
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **Mode** | `LogMode` | Logging mode (level) to be set |
+
+## Example
+
+
+
+
+{`store
+ .Maintenance
+ .Server
+ .Send(new SetLogsConfigurationOperation(
+ new SetLogsConfigurationOperation.Parameters
+ {
+ Mode = LogMode.Information
+ }));
+`}
+
+
+
+
+{`await store
+ .Maintenance
+ .Server
+ .SendAsync(new SetLogsConfigurationOperation(
+ new SetLogsConfigurationOperation.Parameters
+ {
+ Mode = LogMode.Information
+ }));
+`}
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/logs/get-logs-configuration.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/get-logs-configuration.mdx
new file mode 100644
index 0000000000..f4971d4e44
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/get-logs-configuration.mdx
@@ -0,0 +1,24 @@
+---
+title: "Operations: Server: How to Get Logs Configuration"
+hide_table_of_contents: true
+sidebar_label: Get Logs Configuration
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import GetLogsConfigurationCsharp from './_get-logs-configuration-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/logs/set-logs-configuration.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/set-logs-configuration.mdx
new file mode 100644
index 0000000000..5b98c76fdb
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/logs/set-logs-configuration.mdx
@@ -0,0 +1,24 @@
+---
+title: "Operations: Server: How to Get Logs Configuration"
+hide_table_of_contents: true
+sidebar_label: Set Logs Configuration
+sidebar_position: 1
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import SetLogsConfigurationCsharp from './_set-logs-configuration-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/modify-conflict-solver.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/modify-conflict-solver.mdx
new file mode 100644
index 0000000000..b768cf3042
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/modify-conflict-solver.mdx
@@ -0,0 +1,24 @@
+---
+title: "Operations: Server: How to Modify a Conflict Solver"
+hide_table_of_contents: true
+sidebar_label: Modify Conflict Solver
+sidebar_position: 6
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ModifyConflictSolverCsharp from './_modify-conflict-solver-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/promote-database-node.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/promote-database-node.mdx
new file mode 100644
index 0000000000..4be72dd5ea
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/promote-database-node.mdx
@@ -0,0 +1,25 @@
+---
+title: "Operations: Server: How to Promote a Database Node?"
+hide_table_of_contents: true
+sidebar_label: Promote Database Node
+sidebar_position: 7
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PromoteDatabaseNodeCsharp from './_promote-database-node-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/reorder-database-members.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/reorder-database-members.mdx
new file mode 100644
index 0000000000..041ba04ef1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/reorder-database-members.mdx
@@ -0,0 +1,24 @@
+---
+title: "Operations: Server: How to reoder database members?"
+hide_table_of_contents: true
+sidebar_label: Reorder Database Members
+sidebar_position: 10
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ReorderDatabaseMembersCsharp from './_reorder-database-members-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/restore-backup.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/restore-backup.mdx
new file mode 100644
index 0000000000..a57aa82308
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/restore-backup.mdx
@@ -0,0 +1,51 @@
+---
+title: "Operations: Server: How to Restore a Database from the Backup"
+hide_table_of_contents: true
+sidebar_label: Restore Backup
+sidebar_position: 8
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import RestoreBackupCsharp from './_restore-backup-csharp.mdx';
+import RestoreBackupJava from './_restore-backup-java.mdx';
+import RestoreBackupNodejs from './_restore-backup-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_category_.json b/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_category_.json
new file mode 100644
index 0000000000..cae272e9eb
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 15,
+  "label": "Sorters"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_put-sorter-server-wide-csharp.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_put-sorter-server-wide-csharp.mdx
new file mode 100644
index 0000000000..4e50c5c433
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_put-sorter-server-wide-csharp.mdx
@@ -0,0 +1,115 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The Lucene indexing engine allows you to create your own __Custom Sorters__
+ where you can define how query results will be ordered based on your specific requirements.
+
+* Use `PutServerWideSortersOperation` to deploy a custom sorter to the RavenDB server.
+ Once deployed, you can use it to sort query results for all queries made on __all databases__ in your cluster.
+
+* To deploy a custom sorter that will apply only to the database scoped to your [Document Store](../../../../client-api/setting-up-default-database.mdx),
+ see [put custom sorter](../../../../client-api/operations/maintenance/sorters/put-sorter.mdx).
+
+* A custom sorter can also be uploaded server-wide from the [Studio](../../../../studio/database/settings/custom-sorters.mdx).
+
+* In this page:
+ * [Put custom sorter server-wide](../../../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx#put-custom-sorter-server-wide)
+ * [Syntax](../../../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx#syntax)
+
+
+## Put custom sorter server-wide
+
+* First, create your own sorter class that inherits from the Lucene class [Lucene.Net.Search.FieldComparator](https://lucenenet.apache.org/docs/3.0.3/df/d91/class_lucene_1_1_net_1_1_search_1_1_field_comparator.html).
+
+* Then, send the custom sorter to the server using the `PutServerWideSortersOperation`.
+
+
+
+
+{`// Assign the code of your custom sorter as a \`string\`
+string mySorterCode = "";
+
+// Create the \`SorterDefinition\` object
+var customSorterDefinition = new SorterDefinition
+{
+ // The sorter Name must be the same as the sorter's class name in your code
+ Name = "MySorter",
+ // The Code must be compilable and include all necessary using statements
+ Code = mySorterCode
+};
+
+// Define the put sorters operation, pass the sorter definition
+// Note: multiple sorters can be passed, see syntax below
+var putSortersServerWideOp = new PutServerWideSortersOperation(customSorterDefinition);
+
+// Execute the operation by passing it to Maintenance.Server.Send
+store.Maintenance.Server.Send(putSortersServerWideOp);
+`}
+
+
+
+
+{`// Assign the code of your custom sorter as a \`string\`
+string mySorterCode = "";
+
+// Create the \`SorterDefinition\` object
+var customSorterDefinition = new SorterDefinition
+{
+ // The sorter Name must be the same as the sorter's class name in your code
+ Name = "MySorter",
+ // The Code must be compilable and include all necessary using statements
+ Code = mySorterCode
+};
+
+// Define the put sorters operation, pass the sorter definition
+// Note: multiple sorters can be passed, see syntax below
+var putSortersServerWideOp = new PutServerWideSortersOperation(customSorterDefinition);
+
+// Execute the operation by passing it to Maintenance.Server.SendAsync
+await store.Maintenance.Server.SendAsync(putSortersServerWideOp);
+`}
+
+
+
+
+
+
+You can now order your query results using the custom sorter.
+A query example is available [here](../../../../client-api/session/querying/sort-query-results.mdx#custom-sorters).
+
+
+
+
+
+## Syntax
+
+
+
+{`public PutServerWideSortersOperation(params SorterDefinition[] sortersToAdd)
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|----------------------|------------------------------------------------------|
+| __sortersToAdd__ | `SorterDefinition[]` | One or more Sorter Definitions to send to the server |
+
+
+
+
+{`public class SorterDefinition
+\{
+ public string Name \{ get; set; \}
+ public string Code \{ get; set; \}
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_put-sorter-server-wide-nodejs.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_put-sorter-server-wide-nodejs.mdx
new file mode 100644
index 0000000000..fc32e216ce
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/_put-sorter-server-wide-nodejs.mdx
@@ -0,0 +1,85 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* The Lucene indexing engine allows you to create your own __Custom Sorters__
+ where you can define how query results will be ordered based on your specific requirements.
+
+* Use `PutServerWideSortersOperation` to deploy a custom sorter to the RavenDB server.
+ Once deployed, you can use it to sort query results for all queries made on __all databases__ in your cluster.
+
+* To deploy a custom sorter that will apply only to the database scoped to your [Document Store](../../../../client-api/setting-up-default-database.mdx),
+ see [put custom sorter](../../../../client-api/operations/maintenance/sorters/put-sorter.mdx).
+
+* A custom sorter can also be uploaded server-wide from the [Studio](../../../../studio/database/settings/custom-sorters.mdx).
+
+* In this page:
+ * [Put custom sorter server-wide](../../../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx#put-custom-sorter-server-wide)
+ * [Syntax](../../../../client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx#syntax)
+
+
+## Put custom sorter server-wide
+
+* First, create your own sorter class that inherits from the Lucene class [Lucene.Net.Search.FieldComparator](https://lucenenet.apache.org/docs/3.0.3/df/d91/class_lucene_1_1_net_1_1_search_1_1_field_comparator.html).
+
+* Then, send the custom sorter to the server using the `PutServerWideSortersOperation`.
+
+
+
+{`// Create the sorter definition object
+const sorterDefinition = \{
+ // The sorter name must be the same as the sorter's class name in your code
+ name: "MySorter",
+ // The code must be compilable and include all necessary using statements (C# code)
+ code: ""
+\};
+
+// Define the put sorters operation, pass the sorter definition
+const putSortersServerWideOp = new PutServerWideSortersOperation(sorterDefinition);
+
+// Execute the operation by passing it to maintenance.server.send
+await documentStore.maintenance.server.send(putSortersServerWideOp);
+`}
+
+
+
+
+
+You can now order your query results using the custom sorter.
+A query example is available [here](../../../../client-api/session/querying/sort-query-results.mdx#custom-sorters).
+
+
+
+
+
+## Syntax
+
+
+
+{`const putSortersServerWideOp = new PutServerWideSortersOperation(sortersToAdd);
+`}
+
+
+
+| Parameter | Type | Description |
+|-------------------|---------------|------------------------------------------------------|
+| __sortersToAdd__ | `...object[]` | One or more Sorter Definitions to send to the server |
+
+
+
+
+{`// The sorter definition object
+\{
+ name: string;
+ code: string;
+\}
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx
new file mode 100644
index 0000000000..edaf3d3879
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/sorters/put-sorter-server-wide.mdx
@@ -0,0 +1,39 @@
+---
+title: "Put Custom Sorter (Server-Wide) Operation"
+hide_table_of_contents: true
+sidebar_label: Put Custom Sorter
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import PutSorterServerWideCsharp from './_put-sorter-server-wide-csharp.mdx';
+import PutSorterServerWideNodejs from './_put-sorter-server-wide-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/toggle-databases-state.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/toggle-databases-state.mdx
new file mode 100644
index 0000000000..2866753094
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/toggle-databases-state.mdx
@@ -0,0 +1,45 @@
+---
+title: "Toggle Databases State Operation (Enable / Disable)"
+hide_table_of_contents: true
+sidebar_label: Toggle Databases State
+sidebar_position: 9
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ToggleDatabasesStateCsharp from './_toggle-databases-state-csharp.mdx';
+import ToggleDatabasesStatePython from './_toggle-databases-state-python.mdx';
+import ToggleDatabasesStatePhp from './_toggle-databases-state-php.mdx';
+import ToggleDatabasesStateNodejs from './_toggle-databases-state-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/server-wide/toggle-dynamic-database-distribution.mdx b/versioned_docs/version-7.1/client-api/operations/server-wide/toggle-dynamic-database-distribution.mdx
new file mode 100644
index 0000000000..2f994e707a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/server-wide/toggle-dynamic-database-distribution.mdx
@@ -0,0 +1,30 @@
+---
+title: "Operations: Server: Toggle Dynamic Database Distribution"
+hide_table_of_contents: true
+sidebar_label: Toggle Dynamic Database Distribution
+sidebar_position: 11
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import ToggleDynamicDatabaseDistributionCsharp from './_toggle-dynamic-database-distribution-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/operations/what-are-operations.mdx b/versioned_docs/version-7.1/client-api/operations/what-are-operations.mdx
new file mode 100644
index 0000000000..5f9223a814
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/operations/what-are-operations.mdx
@@ -0,0 +1,51 @@
+---
+title: "What are Operations"
+hide_table_of_contents: true
+sidebar_label: What are Operations
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import WhatAreOperationsCsharp from './_what-are-operations-csharp.mdx';
+import WhatAreOperationsJava from './_what-are-operations-java.mdx';
+import WhatAreOperationsPython from './_what-are-operations-python.mdx';
+import WhatAreOperationsPhp from './_what-are-operations-php.mdx';
+import WhatAreOperationsNodejs from './_what-are-operations-nodejs.mdx';
+
+export const supportedLanguages = ["csharp", "java", "python", "php", "nodejs"];
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/rest-api/_category_.json b/versioned_docs/version-7.1/client-api/rest-api/_category_.json
new file mode 100644
index 0000000000..506cea47c7
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/_category_.json
@@ -0,0 +1,4 @@
+{
+    "position": 19,
+    "label": "REST API"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/rest-api/document-commands/_category_.json b/versioned_docs/version-7.1/client-api/rest-api/document-commands/_category_.json
new file mode 100644
index 0000000000..65668e24f1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/document-commands/_category_.json
@@ -0,0 +1,4 @@
+{
+    "position": 1,
+    "label": "Document Commands"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/rest-api/document-commands/batch-commands.mdx b/versioned_docs/version-7.1/client-api/rest-api/document-commands/batch-commands.mdx
new file mode 100644
index 0000000000..8079cd8eee
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/document-commands/batch-commands.mdx
@@ -0,0 +1,923 @@
+---
+title: "Batch Commands"
+hide_table_of_contents: true
+sidebar_label: Batch Commands
+sidebar_position: 5
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Batch Commands
+
+
+* Use this endpoint with the **`POST`** method to send multiple commands in one request:
+`/databases/<database>/bulk_docs`
+
+* The commands are sent as a JSON array in the [request body](../../../client-api/rest-api/document-commands/batch-commands.mdx#body).
+
+* All the commands in the batch will either succeed or fail as a **single transaction**. Changes will not be visible until
+the entire batch completes.
+
+* [Options](../../../client-api/rest-api/document-commands/batch-commands.mdx#batch-options) can be set to make the server wait
+for indexing and replication to complete before returning.
+
+* In this page:
+ * [Basic Example](../../../client-api/rest-api/document-commands/batch-commands.mdx#basic-example)
+ * [Request Format](../../../client-api/rest-api/document-commands/batch-commands.mdx#request-format)
+ * [Commands](../../../client-api/rest-api/document-commands/batch-commands.mdx#commands)
+ * [Response Format](../../../client-api/rest-api/document-commands/batch-commands.mdx#response-format)
+ * [More Examples](../../../client-api/rest-api/document-commands/batch-commands.mdx#more-examples)
+
+## Basic Example
+
+This is a cURL request to a database named "Example" on our [playground server](http://live-test.ravendb.net).
+It batches two commands:
+
+1. Upload a new document called "person/1".
+2. Execute a [patch](../../../client-api/operations/patching/single-document.mdx) on that same document.
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/bulk_docs"
+-H "Content-Type: application/json"
+-d "\{
+ \\"Commands\\": [
+ \{
+ \\"Id\\": \\"person/1\\",
+ \\"ChangeVector\\": null,
+ \\"Document\\": \{
+ \\"Name\\": \\"John Smith\\"
+ \},
+ \\"Type\\": \\"PUT\\"
+ \},
+ \{
+ \\"Id\\": \\"person/1\\",
+ \\"ChangeVector\\": null,
+ \\"Patch\\": \{
+ \\"Script\\": \\"this.Name = 'Jane Doe';\\",
+ \\"Values\\": \{\}
+ \},
+ \\"Type\\": \\"PATCH\\"
+ \}
+ ]
+\}"
+`}
+
+
+Linebreaks are added for clarity.
+
+
+#### Response:
+
+
+
+{`HTTP/1.1 201 Created
+Server: nginx
+Date: Sun, 15 Sep 2019 14:12:30 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Type": "PUT",
+ "@id": "person/1",
+ "@collection": "@empty",
+ "@change-vector": "A:1-urx5nDNUT06FCpCon1wCyA",
+ "@last-modified": "2019-09-15T14:12:30.0425811"
+ \},
+ \{
+ "Id": "person/1",
+ "ChangeVector": "A:2-urx5nDNUT06FCpCon1wCyA",
+ "LastModified": "2019-09-15T14:12:30.0495095",
+ "Type": "PATCH",
+ "PatchStatus": "Patched",
+ "Debug": null
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of a cURL request with a batch of commands that _does not_ include a Put Attachment Command
+(see the format for batching a Put Attachment Command [below](../../../client-api/rest-api/document-commands/batch-commands.mdx#put-attachment-command)):
+
+
+
+{`curl -X POST "<server URL>/databases/<database>/bulk_docs?<batch options>"
+-H "Content-Type: application/json"
+-d "\{
+ \\"Commands\\": [
+ \{ \},
+ ...
+ ]
+\}"
+`}
+
+
+Linebreaks are added for clarity.
+
+
+#### Query String
+
+The query string takes [batch options](../../../client-api/rest-api/document-commands/batch-commands.mdx#batch-options), which
+can make the server wait for indexing and replication to finish before responding.
+
+
+#### Header
+
+The header `Content-Type` is required and takes one of two values:
+
+* `application/json` - if the batch _does not_ include a Put Attachment Command.
+* `multipart/mixed; boundary=<separator>` - if the batch [_does_](../../../client-api/rest-api/document-commands/batch-commands.mdx#put-attachment-command)
+include a Put Attachment Command. The "separator" is an arbitrary string used to demarcate the attachment streams and
+the commands array.
+
+
+#### Body
+
+The body contains a JSON array of commands.
+
+
+
+{`-d "\{
+ \\"Commands\\": [
+ \{
+            \\"Id\\": \\"<document ID>\\",
+            ...
+            \\"Type\\": \\"<command type>\\"
+ \},
+ \{ \},
+ ...
+ ]
+\}"
+`}
+
+
+Depending on the shell you're using to run cURL, you will probably need to escape all double quotes within the request body
+using a backslash: `"` -> `\"`.
+
+The following commands can be sent using the batch command:
+
+* [Put Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#put-document-command)
+* [Patch Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#patch-document-command)
+* [Delete Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-document-command)
+* [Delete by Prefix Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-by-prefix-command)
+* [Put Attachment Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#put-attachment-command)
+* [Delete Attachment Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-attachment-command)
+### Batch Options
+
+These options, configured in the query string, make the server wait until indexing or replication has completed before responding. If they have not
+completed within a specified amount of time, the server either responds as normal or throws an exception.
+
+This is the general format of a cURL request that includes batch options in the query string:
+
+
+
+{`curl -X POST "<server URL>/databases/<database>/bulk_docs?<option>=<value>
+    &<option>=<value>
+    &<option>=<value>
+    ..."
+-H "Content-Type: <content type>"
+-d "\{ <commands> \}"
+`}
+
+
+Linebreaks are added for clarity.
+
+#### Indexing Options
+
+| Query Parameter | Type | Description |
+|---------------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------|
+| **waitForIndexesTimeout** | `TimeSpan` | The amount of time to wait for indexing to complete. [Format of `TimeSpan`](https://docs.microsoft.com/en-us/dotnet/api/system.timespan). |
+| **waitForIndexThrow** | `boolean` | Set to `true` to throw an exception if the indexing doesn't complete before `waitForIndexesTimeout`. Set to `false` to receive the normal response body. |
+| **waitForSpecificIndex** | `string[]` | Wait only for the listed indexes to finish updating, rather than all indexes. |
+
+#### Replication Options
+
+| Query Parameter | Type | Description |
+|-------------------------------------|------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **waitForReplicasTimeout** | `TimeSpan` | The amount of time to wait for replication to complete. [Format of `TimeSpan`](https://docs.microsoft.com/en-us/dotnet/api/system.timespan). |
+| **throwOnTimeoutInWaitForReplicas** | `boolean` | Set to `true` to throw an exception if the replication doesn't complete before `waitForReplicasTimeout`. Set to `false` to receive the normal response body. |
+| **numberOfReplicasToWaitFor** | `int` / `string` | The number of replicas that should be made before `waitForReplicasTimeout`. Set this parameter to `majority` to wait until the data has been replicated to a majority of the nodes in the database group. Default = `1`. |
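+
+For example, this request (a sketch - the "Example" database, the timeout values, and the replica count are illustrative) batches a single Put Document Command and asks the server to wait up to 15 seconds for indexing and up to 30 seconds for two replicas, throwing an exception on either timeout:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/bulk_docs?
+    waitForIndexesTimeout=00:00:15
+    &waitForIndexThrow=true
+    &waitForReplicasTimeout=00:00:30
+    &throwOnTimeoutInWaitForReplicas=true
+    &numberOfReplicasToWaitFor=2"
+-H "Content-Type: application/json"
+-d "\{
+    \\"Commands\\": [
+        \{
+            \\"Id\\": \\"person/1\\",
+            \\"ChangeVector\\": null,
+            \\"Document\\": \{ \\"Name\\": \\"John Smith\\" \},
+            \\"Type\\": \\"PUT\\"
+        \}
+    ]
+\}"
+`}
+
+
+Linebreaks are added for clarity.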
+
+## Commands
+
+### Put Document Command
+
+Upload a new document or update an existing document.
+
+Format within the `Commands` array in the [request body](../../../client-api/rest-api/document-commands/batch-commands.mdx#request-format):
+
+
+
+{`\{
+    \\"Id\\": \\"<document ID>\\",
+    \\"ChangeVector\\": \\"<change vector>\\",
+    \\"Document\\": \{
+        <document contents>
+ \},
+ \\"Type\\": \\"PUT\\",
+ \\"ForceRevisionCreationStrategy\\": \\"Before\\"
+\}
+`}
+
+
+
+| Parameter | Description | Required |
+|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|
+| **Id** | ID of document to create or update | Yes to update, [no to create](../../../client-api/document-identifiers/working-with-document-identifiers.mdx#autogenerated-ids) |
+| **ChangeVector** | When updating an existing document, this parameter is the document's expected [change vector](../../../server/clustering/replication/change-vector.mdx). If it does not match the server-side change vector, a concurrency exception is thrown. An exception is also thrown if the document does not exist. | No |
+| **Document** | JSON document to create, or to replace the existing document | Yes |
+| **Type** | Set to `PUT` | Yes |
+| **ForceRevisionCreationStrategy** | When updating an existing document, set to `Before` to make a [revision](../../../document-extensions/revisions/overview.mdx) of the document before it is updated. | No |
+
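+A minimal sketch of a Put Document Command that also forces a revision of the existing document before it is overwritten (the document ID and contents are illustrative):
+
+
+
+{`\{
+    \\"Id\\": \\"person/1\\",
+    \\"ChangeVector\\": null,
+    \\"Document\\": \{ \\"Name\\": \\"John Smith\\" \},
+    \\"Type\\": \\"PUT\\",
+    \\"ForceRevisionCreationStrategy\\": \\"Before\\"
+\}
+`}
+
+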
+### Patch Document Command
+
+Update a document. A [patch](../../../client-api/operations/patching/single-document.mdx) is executed on the server side and
+does not involve loading the document, avoiding the cost of sending the entire document in a round trip over the network.
+
+Format within the `Commands` array in the [request body](../../../client-api/rest-api/document-commands/batch-commands.mdx#request-format):
+
+
+
+{`\{
+    \\"Id\\": \\"<document ID>\\",
+    \\"ChangeVector\\": \\"<change vector>\\",
+    \\"Patch\\": \{
+        \\"Script\\": \\"<script>\\",
+        \\"Values\\": \{
+            \\"<argument name>\\": \\"<argument value>\\",
+            ...
+        \}
+    \},
+    \\"PatchIfMissing\\": \{
+        \\"Script\\": \\"<script>\\",
+        \\"Values\\": \{
+            <arguments>
+ \}
+ \},
+ \\"Type\\": \\"PATCH\\"
+\}
+`}
+
+
+
+| Parameter | Description | Required |
+| - | - | - |
+| **Id** | ID of a document to execute the patch on | Yes |
+| **ChangeVector** | The document's expected [change vector](../../../server/clustering/replication/change-vector.mdx). If it does not match the server-side change vector a concurrency exception is thrown. | No |
+| **Patch** | A script that modifies the specified document. [Details below](../../../client-api/rest-api/document-commands/batch-commands.mdx#patch-request). | Yes |
+| **PatchIfMissing** | An alternative script to be executed if no document with the given ID is found. This will create a new document with the given ID. [Details below](../../../client-api/rest-api/document-commands/batch-commands.mdx#patch-request). | No |
+| **Type** | Set to `PATCH` | Yes |
+
+#### Patch Request
+
+Using scripts with arguments allows RavenDB to cache scripts and boost performance. For cURL, use single quotes `'` to
+wrap strings.
+
+| Sub-Parameter | Description | Required |
+| - | - | - |
+| **Script** | JavaScript commands to perform on the document. Use arguments from `Values` with a `$` prefix, e.g. `$<argument name>`. | Yes |
+| **Values** | Arguments that can be used in the script. | No |
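+
+A minimal sketch of a `Patch` object that passes an argument through `Values` (the `newName` argument and its value are illustrative):
+
+
+
+{`\{
+    \\"Patch\\": \{
+        \\"Script\\": \\"this.Name = $newName;\\",
+        \\"Values\\": \{
+            \\"newName\\": \\"Jane Doe\\"
+        \}
+    \}
+\}
+`}
+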
+### Delete Document Command
+
+Delete a document by its ID.
+
+Format within the `Commands` array in the [request body](../../../client-api/rest-api/document-commands/batch-commands.mdx#request-format):
+
+
+
+{`\{
+    \\"Id\\": \\"<document ID>\\",
+    \\"ChangeVector\\": \\"<change vector>\\",
+ \\"Type\\": \\"DELETE\\"
+\}
+`}
+
+
+
+| Parameter | Description | Required |
+| - | - | - |
+| **Id** | ID of document to delete (only one can be deleted per command) | Yes |
+| **ChangeVector** | The document's expected [change vector](../../../server/clustering/replication/change-vector.mdx). If it does not match the server-side change vector a concurrency exception is thrown. | No |
+| **Type** | Set to `DELETE` | Yes |
+### Delete by Prefix Command
+
+Delete all documents whose IDs begin with a certain prefix.
+
+Format within the `Commands` array in the [request body](../../../client-api/rest-api/document-commands/batch-commands.mdx#request-format):
+
+
+
+{`\{
+    \\"Id\\": \\"<prefix>\\",
+ \\"IdPrefixed\\": true,
+ \\"Type\\": \\"DELETE\\"
+\}
+`}
+
+
+
+| Parameter | Description | Required |
+| - | - | - |
+| **Id** | All documents whose IDs begin with this string will be deleted | Yes |
+| **IdPrefixed** | Set to `true` (distinguishes this as a Delete by Prefix Command rather than the Delete Document Command described above) | Yes |
+| **Type** | Set to `DELETE` | Yes |
+### Put Attachment Command
+
+Add an [attachment](../../../document-extensions/attachments/what-are-attachments.mdx) to a document, or update an existing attachment.
+
+If a batch contains a Put Attachment Command, the cURL format of the request differs slightly from that of a batch that doesn't.
+The `Content-Type` header takes `multipart/mixed; boundary="<separator>"` instead of the default `application/json`.
+The body contains the `Commands` array followed by each of the attachments, passed in the form of binary streams. The attachment streams come in the
+same order as their respective Put Attachment Commands within the `Commands` array. The `separator` demarcates these sections.
+
+The general form of a cURL request:
+
+
+
+{`curl -X POST "<server URL>/databases/<database>/bulk_docs"
+-H "Content-Type: multipart/mixed; boundary=<separator>"
+-d "
+--<separator>
+\{
+    \\"Commands\\":[
+        \{
+            \\"Id\\": \\"<document ID>\\",
+            \\"Name\\": \\"<attachment name>\\",
+            \\"ContentType\\": \\"<content type>\\",
+            \\"ChangeVector\\": \\"<change vector>\\",
+            \\"Type\\": \\"AttachmentPUT\\"
+        \},
+    ...
+    ]
+\}
+--<separator>
+Command-Type: AttachmentStream
+
+<attachment stream>
+--<separator>
+...
+--<separator>--"
+`}
+
+
+
+| Parameter | Description | Required |
+|------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|
+| **boundary** | The "separator" - an arbitrary string that demarcates the attachment streams. The attachment streams come in the same order as their respective Put Attachment Commands in the commands array. The string used as a separator must not appear elsewhere in the request body - e.g. "ChangeVector" or "{[" would not be valid separators. | Yes |
+| **Id** | Document ID | Yes |
+| **Name** | Name of attachment to create or update | Yes |
+| **ContentType** | Mime type of the attachment | No |
+| **ChangeVector** | The document's expected [change vector](../../../server/clustering/replication/change-vector.mdx). If it does not match the server-side change vector a concurrency exception is thrown. | No |
+| **Type** | Set to `AttachmentPUT` | Yes |
+
+### Delete Attachment Command
+
+Delete an attachment in a certain document.
+
+Format within the `Commands` array in the [request body](../../../client-api/rest-api/document-commands/batch-commands.mdx#request-format):
+
+
+
+{`\{
+    \\"Id\\": \\"<document ID>\\",
+    \\"Name\\": \\"<attachment name>\\",
+    \\"ChangeVector\\": \\"<change vector>\\",
+ \\"Type\\": \\"AttachmentDELETE\\"
+\}
+`}
+
+
+
+| Parameter | Description | Required |
+| - | - | - |
+| **Id** | ID of document for which to delete the attachment | Yes |
+| **Name** | Name of the attachment to delete | Yes |
+| **ChangeVector** | The document's expected [change vector](../../../server/clustering/replication/change-vector.mdx). If it does not match the server-side change vector a concurrency exception is thrown. | No |
+| **Type** | Set to `AttachmentDELETE` | Yes |
+
+
+
+## Response Format
+
+### Http Status Codes
+
+| Code | Description |
+| - | - |
+| `201` | The transaction was successfully completed. |
+| `408` | The time specified by the option `waitForIndexesTimeout` or `waitForReplicasTimeout` passed before indexing or replication completed, respectively, and an exception was thrown. This only happens if `waitForIndexThrow` or `throwOnTimeoutInWaitForReplicas` is set to `true`. |
+| `409` | A specified change vector did not match the server-side change vector, or a change vector was specified for a document that does not exist. A concurrency exception is thrown. |
+| `500` | Invalid request, such as a put attachment command for a document that does not exist. |
+
+### Response Body
+
+Results appear in the same order as the commands in the request body.
+
+
+
+{`\{
+ "Results":[
+ \{ \},
+ ...
+ ]
+\}
+`}
+
+
+
+* Format within the `Results` array in the response body:
+ * [Put Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#put-document-command-1)
+ * [Patch Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#patch-document-command-1)
+ * [Delete Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-document-command-1)
+ * [Delete by Prefix Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-by-prefix-command-1)
+ * [Put Attachment Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#put-attachment-command-1)
+ * [Delete Attachment Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-attachment-command-1)
+
+### Put Document Command
+
+
+
+{`\{
+    "Type": "PUT",
+    "@id": "<document ID>",
+    "@collection": "<collection>",
+    "@change-vector": "<change vector>",
+    "@last-modified": "<date & time>"
+\}
+`}
+
+
+
+| Parameter | Description |
+| - | - |
+| **Type** | Same as the `Type` of the command sent - in this case `PUT`. |
+| **@id** | The ID of the document that has been created or modified. |
+| **@collection** | Name of the [collection](../../../client-api/faq/what-is-a-collection.mdx) that contains the document. If none was specified, the collection will be `@empty`. |
+| **@change-vector** | The document's change vector after the command was executed. |
+| **@last-modified** | Date and time (UTC) of the most recent modification made to the document. |
+
+### Patch Document Command
+
+
+
+{`\{
+    "Id": "<document ID>",
+    "ChangeVector": "<change vector>",
+    "LastModified": "<date & time>",
+    "Type": "PATCH",
+    "PatchStatus": "<patch status>",
+    "Debug": null
+\}
+`}
+
+
+
+| Parameter | Description |
+| - | - |
+| **Id** | The ID of the document that has been patched or created. |
+| **ChangeVector** | The document's change vector after the command was executed. Returns `null` if the command did not result in any changes. |
+| **LastModified** | Date and time (UTC) of the most recent modification made to the document. |
+| **Type** | Same as the `Type` of the command sent - in this case `PATCH`. |
+| **PatchStatus** | See [below](../../../client-api/rest-api/document-commands/batch-commands.mdx#patchstatus) |
+| **Debug** | Should always return `null` in the context of batch commands. |
+
+#### PatchStatus
+
+| Status | Description |
+| - | - |
+| **DocumentDoesNotExist** | No document with the specified ID exists. This will only be returned if no `PatchIfMissing` script was given. |
+| **Created** | No document with the specified ID existed, so a new document was created with that ID and `PatchIfMissing` was applied. |
+| **Patched** | The specified document was successfully patched. |
+| **Skipped** | Should not appear in the context of batch commands. |
+| **NotModified** | Patch was successful but did not result in a modification to the document. |
+
+### Delete Document Command
+
+
+
+{`\{
+    "Id": "<document ID>",
+    "Type": "DELETE",
+    "Deleted": <true/false>
+\}
+`}
+
+
+
+| Parameter | Description |
+| - | - |
+| **Id** | The ID of the document that has been deleted. |
+| **Type** | Same as the `Type` of the command sent - in this case `DELETE`. |
+| **Deleted** | `true` if the document was successfully deleted, `false` if not (for instance, because the specified document did not exist). |
+
+### Delete by Prefix Command
+
+
+
+{`\{
+    "Id": "<prefix>",
+    "Type": "DELETE",
+    "Deleted": <true/false>
+\}
+`}
+
+
+
+| Parameter | Description |
+| - | - |
+| **Id** | The document ID prefix of the documents that were deleted. |
+| **Type** | Same as the `Type` of the command sent - in this case `DELETE`. |
+| **Deleted** | `true` if the documents were successfully deleted, `false` if not (for instance, because no documents with the specified prefix exist). |
+
+### Put Attachment Command
+
+
+
+{`\{
+    "Id": "<document ID>",
+    "Type": "AttachmentPUT",
+    "Name": "<attachment name>",
+    "ChangeVector": "<attachment change vector>",
+    "Hash": "<hash>",
+    "ContentType": "<content type>",
+    "Size": <size in bytes>,
+    "DocumentChangeVector": "<document change vector>"
+\}
+`}
+
+
+
+| Parameter | Description |
+| - | - |
+| **Id** | The ID of the document for which the attachment was put. |
+| **Type** | Same as the `Type` of the command sent - in this case `AttachmentPUT`. |
+| **Name** | Name of the attachment that was created or updated. |
+| **ChangeVector** | A change vector specific to the _attachment_, distinct from the usual document change vector. Use this change vector in requests to update this attachment. |
+| **Hash** | Hash representing the attachment. |
+| **ContentType** | MIME type of the attachment. |
+| **Size** | Size of the attachment in bytes. |
+| **DocumentChangeVector** | The document's change vector after the command was executed. |
+
+### Delete Attachment Command
+
+
+
+{`\{
+    "Type": "AttachmentDELETE",
+    "@id": "<document ID>",
+    "Name": "<attachment name>",
+    "DocumentChangeVector": "<change vector>"
+\}
+`}
+
+
+
+| Parameter | Description |
+| - | - |
+| **Type** | Same as the `Type` of the command sent - in this case `AttachmentDELETE`. |
+| **@id** | The ID of the document for which the attachment was deleted. |
+| **Name** | Name of the attachment that was deleted. |
+| **DocumentChangeVector** | The document's change vector after the command was executed. |
+
+
+
+## More Examples
+
+[About Northwind](../../../start/about-examples.mdx), the database used in our examples.
+
+* In this section:
+ * [Put Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#put-document-command-2)
+ * [Patch Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#patch-document-command-2)
+ * [Delete Document Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-document-command-2)
+ * [Delete by Prefix Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-by-prefix-command-2)
+ * [Put Attachment Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#put-attachment-command-2)
+ * [Delete Attachment Command](../../../client-api/rest-api/document-commands/batch-commands.mdx#delete-attachment-command-2)
+### Put Document Command
+
+Request:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/bulk_docs"
+-H "Content-Type: application/json"
+-d "\{
+ \\"Commands\\": [
+ \{
+ \\"Id\\": \\"person/1\\",
+ \\"ChangeVector\\": null,
+ \\"Document\\": \{
+ \\"Name\\": \\"John Smith\\"
+ \},
+ \\"Type\\": \\"PUT\\"
+ \}
+ ]
+\}"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 201 Created
+Server:"nginx"
+Date:"Wed, 18 Sep 2019 16:14:20 GMT"
+Content-Type:"application/json; charset=utf-8"
+Transfer-Encoding:"chunked"
+Connection:"keep-alive"
+Content-Encoding:"gzip"
+Vary:"Accept-Encoding"
+Raven-Server-Version:"4.2.4.42"
+
+\{
+ "Results": [
+ \{
+ "Type": "PUT",
+ "@id": "person/1",
+ "@collection": "@empty",
+ "@change-vector": "A:5951-pITDlhlRaEeJh16dDBREzg, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@last-modified": "2019-09-18T16:14:20.5759532"
+ \}
+ ]
+\}
+`}
+
+
+### Patch Document Command
+
+Request:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/bulk_docs"
+-H "Content-Type: application/json"
+-d "\{
+ \\"Commands\\": [
+ \{
+ \\"Id\\": \\"person/1\\",
+ \\"ChangeVector\\": null,
+ \\"Patch\\": \{
+ \\"Script\\": \\"this.Name = 'Jane Doe';\\",
+ \\"Values\\": \{\}
+ \},
+ \\"Type\\": \\"PATCH\\"
+ \}
+ ]
+\}"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 201 Created
+Server:"nginx"
+Date:"Wed, 18 Sep 2019 16:18:13 GMT"
+Content-Type:"application/json; charset=utf-8"
+Transfer-Encoding:"chunked"
+Connection:"keep-alive"
+Content-Encoding:"gzip"
+Vary:"Accept-Encoding"
+Raven-Server-Version:"4.2.4.42"
+
+\{
+ "Results": [
+ \{
+ "Id": "person/1",
+ "ChangeVector": "A:5952-pITDlhlRaEeJh16dDBREzg, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "LastModified": "2019-09-18T16:18:13.5745560",
+ "Type": "PATCH",
+ "PatchStatus": "Patched",
+ "Debug": null
+ \}
+ ]
+\}
+`}
+
+
+### Delete Document Command
+
+Request:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/bulk_docs"
+-H "Content-Type: application/json"
+-d "\{
+ \\"Commands\\": [
+ \{
+ \\"Id\\": \\"employees/1-A\\",
+ \\"ChangeVector\\": null,
+ \\"Type\\": \\"DELETE\\"
+ \}
+ ]
+\}"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 201 Created
+Server:"nginx"
+Date:"Wed, 18 Sep 2019 16:30:15 GMT"
+Content-Type:"application/json; charset=utf-8"
+Transfer-Encoding:"chunked"
+Connection:"keep-alive"
+Content-Encoding:"gzip"
+Vary:"Accept-Encoding"
+Raven-Server-Version:"4.2.4.42"
+
+\{
+ "Results": [
+ \{
+ "Id": "employees/1-A",
+ "Type": "DELETE",
+ "Deleted": true,
+ "ChangeVector": null
+ \}
+ ]
+\}
+`}
+
+
+### Delete by Prefix Command
+
+Request:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/bulk_docs"
+-H "Content-Type: application/json"
+-d "\{
+ \\"Commands\\": [
+ \{
+ \\"Id\\": \\"employ\\",
+ \\"ChangeVector\\": null,
+ \\"IdPrefixed\\": true,
+ \\"Type\\": \\"DELETE\\"
+ \}
+ ]
+\}"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 201 Created
+Server:"nginx"
+Date:"Wed, 18 Sep 2019 16:32:16 GMT"
+Content-Type:"application/json; charset=utf-8"
+Transfer-Encoding:"chunked"
+Connection:"keep-alive"
+Content-Encoding:"gzip"
+Vary:"Accept-Encoding"
+Raven-Server-Version:"4.2.4.42"
+
+\{
+ "Results": [
+ \{
+ "Id": "employ",
+ "Type": "DELETE",
+ "Deleted": true
+ \}
+ ]
+\}
+`}
+
+
+### Put Attachment Command
+
+Request:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/bulk_docs"
+-H "Content-Type: multipart/mixed; boundary=some_boundary"
+-d "
+--some_boundary
+\{
+ \\"Commands\\": [
+ \{
+            \\"Id\\":\\"shippers/1-A\\",
+            \\"Name\\":\\"some_file\\",
+            \\"ContentType\\":\\"text\\",
+            \\"Type\\":\\"AttachmentPUT\\"
+ \}
+ ]
+\}
+--some_boundary
+Command-Type: AttachmentStream
+
+12345
+--some_boundary--"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 201 Created
+Server:"nginx"
+Date:"Wed, 18 Sep 2019 16:40:43 GMT"
+Content-Type:"application/json; charset=utf-8"
+Transfer-Encoding:"chunked"
+Connection:"keep-alive"
+Content-Encoding:"gzip"
+Vary:"Accept-Encoding"
+Raven-Server-Version:"4.2.4.42"
+
+\{
+ "Results": [
+ \{
+ "Id": "shippers/1-A",
+ "Type": "AttachmentPUT",
+ "Name": "some_file",
+ "ChangeVector": "A:5973-pITDlhlRaEeJh16dDBREzg, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "Hash": "DHnN2gtPymAUoaFxtgjxfU83O8fxGHw8+H/P+kkPxjg=",
+ "ContentType": "text",
+ "Size": 5,
+ "DocumentChangeVector": "A:5974-pITDlhlRaEeJh16dDBREzg, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A"
+ \}
+ ]
+\}
+`}
+
+
+### Delete Attachment Command
+
+Request:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/bulk_docs"
+-H "Content-Type: application/json"
+-d "\{
+ \\"Commands\\": [
+ \{
+ \\"Id\\": \\"categories/2-A\\",
+ \\"Name\\": \\"image.jpg\\",
+ \\"ChangeVector\\": null,
+ \\"Type\\": \\"AttachmentDELETE\\"
+ \}
+ ]
+\}"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 201 Created
+Server:"nginx"
+Date:"Wed, 18 Sep 2019 16:44:40 GMT"
+Content-Type:"application/json; charset=utf-8"
+Transfer-Encoding:"chunked"
+Connection:"keep-alive"
+Content-Encoding:"gzip"
+Vary:"Accept-Encoding"
+Raven-Server-Version:"4.2.4.42"
+
+\{
+ "Results": [
+ \{
+ "Type": "AttachmentDELETE",
+ "@id": "categories/2-A",
+ "Name": "image.jpg",
+ "DocumentChangeVector": "A:5979-pITDlhlRaEeJh16dDBREzg, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A"
+ \}
+ ]
+\}
+`}
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/document-commands/delete-document.mdx b/versioned_docs/version-7.1/client-api/rest-api/document-commands/delete-document.mdx
new file mode 100644
index 0000000000..f12f2a4f08
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/document-commands/delete-document.mdx
@@ -0,0 +1,90 @@
+---
+title: "Delete a Document"
+hide_table_of_contents: true
+sidebar_label: Delete a Document
+sidebar_position: 4
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Delete a Document
+
+
+* Use this endpoint with the **`DELETE`** method to delete one document from the database:
+`/databases/<database>/docs?id=<document ID>`
+
+* In this page:
+ * [Example](../../../client-api/rest-api/document-commands/delete-document.mdx#example)
+ * [Request Format](../../../client-api/rest-api/document-commands/delete-document.mdx#request-format)
+ * [Response Format](../../../client-api/rest-api/document-commands/delete-document.mdx#response-format)
+
+
+## Example
+
+This is a cURL request to delete the document "employees/1-A" from a database named "Example" on our
+[playground server](http://live-test.ravendb.net):
+
+
+
+{`curl -X DELETE "http://live-test.ravendb.net/databases/Example/docs?id=employees/1-A"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 204 No Content
+Server: nginx
+Date: Tue, 27 Aug 2019 11:40:12 GMT
+Connection: keep-alive
+Raven-Server-Version: 4.2.3.42
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of the cURL request:
+
+
+
+{`curl -X DELETE "<server URL>/databases/<database>/docs?id=<document ID>"
+--header "If-Match: <expected change vector>"
+`}
+
+
+
+| Query Parameters | Description | Required |
+| - | - | - |
+| **id** | ID of a document to be deleted. | Yes |
+
+| Headers | Description | Required |
+| - | - | - |
+| **If-Match** | Expected [change vector](../../../server/clustering/replication/change-vector.mdx). If it matches the server-side change vector, the document is deleted; if they don't match, a concurrency exception is thrown. | No |
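+
+For example, this sketch deletes "employees/1-A" only if its change vector still matches the one the client last saw (the change vector value below is illustrative):
+
+
+
+{`curl -X DELETE "http://live-test.ravendb.net/databases/Example/docs?id=employees/1-A"
+--header "If-Match: A:57-k50KTOC5G0mfVXKjomTNFQ"
+`}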
+
+
+
+## Response Format
+
+| Header | Description |
+| - | - |
+| **Content-Type** | MIME media type and character encoding. This should always be: `application/json; charset=utf-8`. |
+| **Raven-Server-Version** | Version of RavenDB that the responding server is running |
+
+| HTTP Status Code | Description |
+| - | - |
+| `204` | The document was successfully deleted, _or_ no document with the specified ID exists. |
+| `409` | The change vector submitted did not match the server-side change vector. A concurrency exception is thrown. |
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-all-documents.mdx b/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-all-documents.mdx
new file mode 100644
index 0000000000..7462edb72c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-all-documents.mdx
@@ -0,0 +1,431 @@
+---
+title: "Get All Documents"
+hide_table_of_contents: true
+sidebar_label: Get All Documents
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Get All Documents
+
+
+* Use this endpoint with the **`GET`** method to retrieve all documents from the database:
+`/databases/<database>/docs`
+
+* Query parameters can be used to page the results.
+
+* In this page:
+ * [Basic Example](../../../client-api/rest-api/document-commands/get-all-documents.mdx#basic-example)
+ * [Request Format](../../../client-api/rest-api/document-commands/get-all-documents.mdx#request-format)
+ * [Response Format](../../../client-api/rest-api/document-commands/get-all-documents.mdx#response-format)
+ * [Query Parameter Examples](../../../client-api/rest-api/document-commands/get-all-documents.mdx#query-parameter-examples)
+ * [start](../../../client-api/rest-api/document-commands/get-all-documents.mdx#start)
+ * [pageSize](../../../client-api/rest-api/document-commands/get-all-documents.mdx#pagesize)
+ * [metadataOnly](../../../client-api/rest-api/document-commands/get-all-documents.mdx#metadataonly)
+
+## Basic Example
+
+This is a cURL request to a database named "Example" on our [playground server](http://live-test.ravendb.net). Paging
+through all of the documents in the database, the request skips the first 9 documents and retrieves the next 2.
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?start=9&pageSize=2"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 10 Oct 2019 12:00:40 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:2134-W33iO0zJC0qZKWh6fjnp6A, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Seafood",
+ "Description": "Seaweed and fish",
+ "@metadata": \{
+ "@attachments": [
+ \{
+ "Name": "image.jpg",
+ "Hash": "GWdpGVCWyLsrtNdA5AOee0QOZFG6rKIqCosZZN5WnCA=",
+ "ContentType": "image/jpeg",
+ "Size": 33396
+ \}
+ ],
+ "@collection": "Categories",
+ "@change-vector": "A:2107-W33iO0zJC0qZKWh6fjnp6A",
+ "@flags": "HasAttachments",
+ "@id": "categories/8-A",
+ "@last-modified": "2018-07-27T12:21:39.1315788Z"
+ \}
+ \},
+ \{
+ "Name": "Produce",
+ "Description": "Dried fruit and bean curd",
+ "@metadata": \{
+ "@attachments": [
+ \{
+ "Name": "image.jpg",
+ "Hash": "asY7yUHhdgaVoKhivgua0OUSJKXqNDa3Z1uLP9XAocM=",
+ "ContentType": "image/jpeg",
+ "Size": 61749
+ \}
+ ],
+ "@collection": "Categories",
+ "@change-vector": "A:2104-W33iO0zJC0qZKWh6fjnp6A",
+ "@flags": "HasAttachments",
+ "@id": "categories/7-A",
+ "@last-modified": "2018-07-27T12:21:11.2283909Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of a cURL request that uses all query string parameters:
+
+
+
+{`curl -X GET "<server URL>/databases/<database>/docs?
+    start=<number of results to skip>
+    &pageSize=<maximum number of results>
+    &metadataOnly=<true/false>"
+--header "If-None-Match: <hash>"
+`}
+
+
+Linebreaks are added for clarity.
+
+
+#### Query String Parameters
+
+| Parameter | Description | Required |
+| - | - | - |
+| **start** | Number of results to skip. | No |
+| **pageSize** | Maximum number of results to retrieve. | No |
+| **metadataOnly** | Set this parameter to `true` to retrieve only the document metadata from each result. | No |
+
+#### Headers
+
+| Header | Description | Required |
+| - | - | - |
+| **If-None-Match** | This header takes a hash representing the previous results of an **identical** request. The hash is found in the response header `ETag`. If the results were not modified since the previous request, the server responds with http status code `304`, and the requested documents are retrieved from a local cache rather than over the network. | No |
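+
+For example, after the [basic example](../../../client-api/rest-api/document-commands/get-all-documents.mdx#basic-example) above returns its `ETag`, repeating the identical request with that hash lets the server answer `304` and serve the documents from the local cache when nothing has changed (the hash below is copied from the basic example's response):
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?start=9&pageSize=2"
+--header "If-None-Match: A:2134-W33iO0zJC0qZKWh6fjnp6A, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A"
+`}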
+
+
+
+## Response Format
+
+#### Http Status Codes
+
+| Code | Description |
+| ----------- | - |
+| `200` | Results were successfully retrieved |
+| `304` | In response to an `If-None-Match` check: none of the requested documents were modified since they were last loaded, so they were not retrieved from the server. |
+
+#### Headers
+
+| Header | Description |
+| - | - |
+| **Content-Type** | MIME media type and character encoding. This should always be: `application/json; charset=utf-8` |
+| **ETag** | Hash representing the state of these results. If another, **identical** request is made, this hash can be sent in the `If-None-Match` header to check whether the retrieved documents have been modified since the last response. |
+| **Raven-Server-Version** | Version of RavenDB that the responding server is running |
+
+#### Body
+
+Retrieved documents are sorted in descending order of their [change vectors](../../../server/clustering/replication/change-vector.mdx).
+A retrieved document is identical in contents and format to the document stored on the server - unless the `metadataOnly`
+parameter is set to `true`.
+
+This is the general format of the JSON response body:
+
+
+
+{`\{
+    "Results": [
+        \{
+            "<field>": "<value>",
+ ...
+ "@metadata":\{
+ ...
+ \}
+ \},
+ \{ \},
+ ...
+ ]
+\}
+`}
+
+
+Linebreaks are added for clarity.
+
+
+
+## Query Parameter Examples
+
+[About Northwind](../../../start/about-examples.mdx), the database used in our examples.
+
+In this section:
+
+* [start](../../../client-api/rest-api/document-commands/get-all-documents.mdx#start)
+* [pageSize](../../../client-api/rest-api/document-commands/get-all-documents.mdx#pagesize)
+* [metadataOnly](../../../client-api/rest-api/document-commands/get-all-documents.mdx#metadataonly)
+### start
+
+Skip the first 1,057 documents and retrieve the rest (our version of Northwind contains 1,059 documents).
+cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?start=1057"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 10 Oct 2019 16:30:37 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:2134-W33iO0zJC0qZKWh6fjnp6A, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "ExternalId": "ALFKI",
+ "Name": "Alfreds Futterkiste",
+ "Contact": \{
+ "Name": "Maria Anders",
+ "Title": "Sales Representative"
+ \},
+ "Address": \{
+ "Line1": "Obere Str. 57",
+ "Line2": null,
+ "City": "Berlin",
+ "Region": null,
+ "PostalCode": "12209",
+ "Country": "Germany",
+ "Location": \{
+ "Latitude": 53.24939,
+ "Longitude": 14.43286
+ \}
+ \},
+ "Phone": "030-0074321",
+ "Fax": "030-0076545",
+ "@metadata": \{
+ "@collection": "Companies",
+ "@change-vector": "A:3-W33iO0zJC0qZKWh6fjnp6A",
+ "@id": "companies/1-A",
+ "@last-modified": "2018-07-27T12:11:53.0182893Z"
+ \}
+ \},
+ \{
+ "Max": 8,
+ "@metadata": \{
+ "@collection": "@hilo",
+ "@change-vector": "A:1-W33iO0zJC0qZKWh6fjnp6A",
+ "@id": "Raven/Hilo/categories",
+ "@last-modified": "2018-07-27T12:11:53.0145929Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+### pageSize
+
+Retrieve the first document.
+cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?pageSize=1"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 10 Oct 2019 16:33:31 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:2134-W33iO0zJC0qZKWh6fjnp6A, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "LastName": "Callahan",
+ "FirstName": "Laura",
+ "Title": "Inside Sales Coordinator",
+ "Address": \{
+ "Line1": "4726 - 11th Ave. N.E.",
+ "Line2": null,
+ "City": "Seattle",
+ "Region": "WA",
+ "PostalCode": "98105",
+ "Country": "USA",
+ "Location": \{
+ "Latitude": 47.664164199999988,
+ "Longitude": -122.3160148
+ \}
+ \},
+ "HiredAt": "1994-03-05T00:00:00.0000000",
+ "Birthday": "1958-01-09T00:00:00.0000000",
+ "HomePhone": "(206) 555-1189",
+ "Extension": "2344",
+ "ReportsTo": "employees/2-A",
+ "Notes": [
+ "Laura received a BA in psychology from the University of Washington. She has also completed a course in business French. She reads and writes French."
+ ],
+ "Territories": [
+ "19428",
+ "44122",
+ "45839",
+ "53404"
+ ],
+ "@metadata": \{
+ "@attachments": [
+ \{
+ "Name": "photo.jpg",
+ "Hash": "8dte+O8Ds9RJx8dKruWurqapAojM/ZxjHBMst9wm5sI=",
+ "ContentType": "image/jpeg",
+ "Size": 14446
+ \}
+ ],
+ "@collection": "Employees",
+ "@change-vector": "A:2134-W33iO0zJC0qZKWh6fjnp6A",
+ "@flags": "HasAttachments",
+ "@id": "employees/8-A",
+ "@last-modified": "2018-07-27T12:26:25.0179915Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+### metadataOnly
+
+Skip the first 123 documents, take the next 5, and retrieve only the metadata of each document.
+cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ start=123
+ &pageSize=5
+ &metadataOnly=true"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 10 Oct 2019 16:50:00 GMT
+Content-Type: application/json; charset=utf-8
+Connection: keep-alive
+ETag: "A:2134-W33iO0zJC0qZKWh6fjnp6A, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+Content-Length: 918
+
+\{
+ "Results": [
+ \{
+ "@metadata": \{
+ "@collection": "Orders",
+ "@change-vector": "A:1871-W33iO0zJC0qZKWh6fjnp6A",
+ "@flags": "HasRevisions",
+ "@id": "orders/728-A",
+ "@last-modified": "2018-07-27T12:11:53.1753957Z"
+ \}
+ \},
+ \{
+ "@metadata": \{
+ "@collection": "Orders",
+ "@change-vector": "A:1869-W33iO0zJC0qZKWh6fjnp6A",
+ "@flags": "HasRevisions",
+ "@id": "orders/727-A",
+ "@last-modified": "2018-07-27T12:11:53.1751418Z"
+ \}
+ \},
+ \{
+ "@metadata": \{
+ "@collection": "Orders",
+ "@change-vector": "A:1867-W33iO0zJC0qZKWh6fjnp6A",
+ "@flags": "HasRevisions",
+ "@id": "orders/726-A",
+ "@last-modified": "2018-07-27T12:11:53.1749721Z"
+ \}
+ \},
+ \{
+ "@metadata": \{
+ "@collection": "Orders",
+ "@change-vector": "A:1865-W33iO0zJC0qZKWh6fjnp6A",
+ "@flags": "HasRevisions",
+ "@id": "orders/725-A",
+ "@last-modified": "2018-07-27T12:11:53.1747646Z"
+ \}
+ \},
+ \{
+ "@metadata": \{
+ "@collection": "Orders",
+ "@change-vector": "A:1863-W33iO0zJC0qZKWh6fjnp6A",
+ "@flags": "HasRevisions",
+ "@id": "orders/724-A",
+ "@last-modified": "2018-07-27T12:11:53.1745710Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-documents-by-id.mdx b/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-documents-by-id.mdx
new file mode 100644
index 0000000000..d0a1f45b3f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-documents-by-id.mdx
@@ -0,0 +1,471 @@
+---
+title: "Get Documents by ID"
+hide_table_of_contents: true
+sidebar_label: Get Documents by ID
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Get Documents by ID
+
+
+* Use this endpoint with the **`GET`** method to retrieve documents from the database according to their document IDs:
+`/databases/<database>/docs?id=<document ID>`
+
+* Query parameters can be used to include [related documents](../../../client-api/how-to/handle-document-relationships.mdx#includes) and
+[counters](../../../document-extensions/counters/overview.mdx).
+
+* In this page:
+ * [Basic Example](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#basic-example)
+ * [Request Format](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#request-format)
+ * [Response Format](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#response-format)
+ * [More Examples](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#more-examples)
+ * [Get Multiple Documents](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#get-multiple-documents)
+ * [Get Related Documents](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#get-related-documents)
+ * [Get Document Metadata Only](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#get-document-metadata-only)
+ * [Get Document Counters](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#get-document-counters)
+
+## Basic Example
+
+This is a cURL request to retrieve one document named "products/48-A" from a database named "Example" on our
+[playground server](http://live-test.ravendb.net):
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?id=products/48-A"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Tue, 10 Sep 2019 10:33:04 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:285-k50KTOC5G0mfVXKjomTNFQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Chocolade",
+ "Supplier": "suppliers/22-A",
+ "Category": "categories/3-A",
+ "QuantityPerUnit": "10 pkgs.",
+ "PricePerUnit": 12.7500,
+ "UnitsInStock": 22,
+ "UnitsOnOrder": 15,
+ "Discontinued": false,
+ "ReorderLevel": 25,
+ "@metadata": \{
+ "@collection": "Products",
+ "@change-vector": "A:285-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "products/48-A",
+ "@last-modified": "2018-07-27T12:11:53.0300420Z"
+ \}
+ \}
+ ],
+ "Includes": \{\}
+\}
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of a cURL request that uses all parameters:
+
+
+
+{`curl -X GET "<server URL>/databases/<database>/docs?
+    id=<document ID>
+    &include=<path to related document ID>
+    &counter=<counter name>
+    &metadataOnly=<true/false>"
+--header "If-None-Match: <hash>"
+`}
+
+
+Linebreaks are added for clarity.
+
+
+#### Query String Parameters
+
+| Parameter | Description | Required / # |
+|------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------|
+| **id** | ID of a document to retrieve. If no IDs are specified, all the documents in the database are retrieved in descending order of their [change vectors](../../../server/clustering/replication/change-vector.mdx). | Yes; Can be used more than once |
+| **include** | Path to a field containing the ID of another, 'related' document. [See: How to Handle Document Relationships](../../../client-api/how-to/handle-document-relationships.mdx#includes). | No; Can be used more than once |
+| **counter** | Name of a [counter](../../../document-extensions/counters/overview.mdx) to retrieve. Set this parameter to `@all_counters` to retrieve all counters of the specified documents. Counters of _included_ documents, however, will not be retrieved. | No; Can be used more than once |
+| **metadataOnly** | Set this parameter to `true` to retrieve only the metadata of each document. This does not apply to included documents, which are retrieved with their complete contents. | No; Used once |
+
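+For example, a sketch that retrieves "products/48-A" together with all of its counters, using the `@all_counters` value described above:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+    id=products/48-A
+    &counter=@all_counters"
+`}
+
+
+Linebreaks are added for clarity.
+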
+#### Headers
+
+| Header | Description | Required |
+| - | - | - |
+| **If-None-Match** | This header takes a hash representing the previous results of an **identical** request. The hash is found in the response header `ETag`. If the results were not modified since the previous request, the server responds with http status code `304` and the requested documents are retrieved from a local cache rather than over the network. | No |
+
+## Response Format
+
+#### Http Status Codes
+
+| Code | Description |
+| - | - |
+| `200` | Results are successfully retrieved. If a requested document could not be found, the result returned is `null`. |
+| `304` | In response to an `If-None-Match` check: none of the requested documents were modified since they were last loaded, so they were not retrieved from the server. |
+| `404` | No document with the specified ID was found. This code is only sent when _one_ document was requested. Otherwise, see status code `200`. |
+
+#### Headers
+
+| Header | Description |
+| - | - |
+| **Content-Type** | MIME media type and character encoding. This should always be: `application/json; charset=utf-8`. |
+| **ETag** | Hash representing the state of these results. If another, **identical** request is made, this hash can be sent in the `If-None-Match` header to check whether the retrieved documents have been modified since the last response. If none were modified, they are not retrieved. |
+| **Raven-Server-Version** | Version of RavenDB that the responding server is running. |
+
+#### Body
+
+A retrieved document is identical in contents and format to the document stored on the server (unless the `metadataOnly`
+parameter is set to `true`).
+
+This is the general JSON format of the response body:
+
+
+
+{`\{
+ "Results": [
+        \{
+            <document contents>
+        \},
+        \{ \},
+        ...
+    ],
+    "Includes": \{
+        "<ID of included document>": \{
+            <document contents>
+        \},
+        "<ID of included document>": \{ \},
+        ...
+    \},
+    "CounterIncludes": \{
+        "<document ID>": [
+            \{
+                "DocumentId": "<document ID>",
+                "CounterName": "<counter name>",
+                "TotalValue": <counter value>
+            \},
+            \{ \},
+            ...
+        ],
+        "<document ID>": [ ],
+ ...
+ \}
+\}
+`}
+
+
+Linebreaks are added for clarity.
+
+
+
+## More Examples
+
+[About Northwind](../../../start/about-examples.mdx), the database used in our examples.
+
+In this section:
+
+* [Get Multiple Documents](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#get-multiple-documents)
+* [Get Related Documents](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#get-related-documents)
+* [Get Document Metadata Only](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#get-document-metadata-only)
+* [Get Document Counters](../../../client-api/rest-api/document-commands/get-documents-by-id.mdx#get-document-counters)
+### Get Multiple Documents
+
+Example cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ id=shippers/1-A
+ &id=shippers/2-A"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 12 Sep 2019 09:23:49 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "Hash-auWLG9xq3imTfRdJvlKIL32LhEM0IwJ20eiibWse0X8="
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Speedy Express",
+ "Phone": "(503) 555-9831",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:349-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/1-A",
+ "@last-modified": "2018-07-27T12:11:53.0317375Z"
+ \}
+ \},
+ \{
+ "Name": "United Package",
+ "Phone": "(503) 555-3199",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:351-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/2-A",
+ "@last-modified": "2018-07-27T12:11:53.0317596Z"
+ \}
+ \}
+ ],
+ "Includes": \{\}
+\}
+`}
+
+
+### Get Related Documents
+
+Example cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Demo/docs?
+ id=products/48-A
+ &include=Supplier
+ &include=Category"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Tue, 10 Sep 2019 10:40:27 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "Hash-9oK1ZcWmNa9SD9hP8m0vT355ztQuFnF/vKD5ILyI/KY="
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Chocolade",
+ "Supplier": "suppliers/22-A",
+ "Category": "categories/3-A",
+ "QuantityPerUnit": "10 pkgs.",
+ "PricePerUnit": 12.7500,
+ "UnitsInStock": 22,
+ "UnitsOnOrder": 15,
+ "Discontinued": false,
+ "ReorderLevel": 25,
+ "@metadata": \{
+ "@collection": "Products",
+ "@change-vector": "A:285-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "products/48-A",
+ "@last-modified": "2018-07-27T12:11:53.0300420Z"
+ \}
+ \}
+ ],
+ "Includes": \{
+ "suppliers/22-A": \{
+ "Contact": \{
+ "Name": "Dirk Luchte",
+ "Title": "Accounting Manager"
+ \},
+ "Name": "Zaanse Snoepfabriek",
+ "Address": \{
+ "Line1": "Verkoop Rijnweg 22",
+ "Line2": null,
+ "City": "Zaandam",
+ "Region": null,
+ "PostalCode": "9999 ZZ",
+ "Country": "Netherlands",
+ "Location": null
+ \},
+ "Phone": "(12345) 1212",
+ "Fax": "(12345) 1210",
+ "HomePage": null,
+ "@metadata": \{
+ "@collection": "Suppliers",
+ "@change-vector": "A:399-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "suppliers/22-A",
+ "@last-modified": "2018-07-27T12:11:53.0335729Z"
+ \}
+ \},
+ "categories/3-A": \{
+ "Name": "Confections",
+ "Description": "Desserts, candies, and sweet breads",
+ "@metadata": \{
+ "@attachments": [
+ \{
+ "Name": "image.jpg",
+ "Hash": "1QxSMa3tBr+y8wQYNre7E9UJFFVTNWGjVoC+IC+gSSs=",
+ "ContentType": "image/jpeg",
+ "Size": 47955
+ \}
+ ],
+ "@collection": "Categories",
+ "@change-vector": "A:2092-k50KTOC5G0mfVXKjomTNFQ",
+ "@flags": "HasAttachments",
+ "@id": "categories/3-A",
+ "@last-modified": "2018-07-27T12:16:44.1738714Z"
+ \}
+ \}
+ \}
+\}
+`}
+
+
+### Get Document Metadata Only
+
+Example cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ id=orders/19-A
+ &metadataOnly=true"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Tue, 10 Sep 2019 10:52:28 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:453-k50KTOC5G0mfVXKjomTNFQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "@metadata": \{
+ "@collection": "Orders",
+ "@change-vector": "A:453-k50KTOC5G0mfVXKjomTNFQ",
+ "@flags": "HasRevisions",
+ "@id": "orders/19-A",
+ "@last-modified": "2018-07-27T12:11:53.0476121Z"
+ \}
+ \}
+ ],
+ "Includes": \{\}
+\}
+`}
+
+
+### Get Document Counters
+
+Example cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ id=products/48-A
+ &counter=MoLtUaE"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Tue, 10 Sep 2019 12:26:04 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:5957-k50KTOC5G0mfVXKjomTNFQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Chocolade",
+ "Supplier": "suppliers/22-A",
+ "Category": "categories/3-A",
+ "QuantityPerUnit": "10 pkgs.",
+ "PricePerUnit": 12.7500,
+ "UnitsInStock": 22,
+ "UnitsOnOrder": 15,
+ "Discontinued": false,
+ "ReorderLevel": 25,
+ "@metadata": \{
+ "@collection": "Products",
+ "@counters": [
+ "#OfCounters",
+ "MoLtUaE"
+ ],
+ "@change-vector": "A:285-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "products/48-A",
+ "@flags": "HasRevisions, HasCounters",
+ "@last-modified": "2019-09-10T12:25:44.1759382Z"
+ \}
+ \}
+    ],
+    "Includes": \{\},
+    "CounterIncludes": \{
+        "products/48-A": [
+            \{
+                "DocumentId": "products/48-A",
+ "CounterName": "MoLtUaE",
+ "TotalValue": 42
+ \}
+ ]
+ \}
+\}
+`}
+
+
+
+(Note that the standard [Northwind data](../../../start/about-examples.mdx) does not contain any [counters](../../../document-extensions/counters/overview.mdx)
+when it is [generated in the Studio](../../../studio/database/document-extensions/counters.mdx) - counters were added to "products/48-A" for this example.)
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-documents-by-prefix.mdx b/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-documents-by-prefix.mdx
new file mode 100644
index 0000000000..d64809ba4a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/document-commands/get-documents-by-prefix.mdx
@@ -0,0 +1,502 @@
+---
+title: "Get Documents by Prefix"
+hide_table_of_contents: true
+sidebar_label: Get Documents by Prefix
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Get Documents by Prefix
+
+
+* Use this endpoint with the **`GET`** method to retrieve documents from the database by a common prefix in their document IDs:
+`/databases//docs?startsWith=`
+
+* Query parameters can be used to filter and page the results.
+
+* In this page:
+ * [Basic Example](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#basic-example)
+ * [Request Format](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#request-format)
+ * [Response Format](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#response-format)
+ * [More Examples](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#more-examples)
+ * [Get Using `matches`](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#get-using)
+ * [Get Using `matches` and `exclude`](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#get-usingand)
+ * [Get Using `startAfter`](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#get-using-1)
+ * [Page Results](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#page-results)
+ * [Get Document Metadata Only](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#get-document-metadata-only)
+
+## Basic Example
+
+This is a cURL request to retrieve all documents whose IDs begin with the prefix "ship" from a database named "Example" on
+our [playground server](http://live-test.ravendb.net):
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?startsWith=ship"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Tue, 10 Sep 2019 15:25:34 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:2137-pIhs+72n6USJoZ5XIvTHvQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Speedy Express",
+ "Phone": "(503) 555-9831",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:349-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/1-A",
+ "@last-modified": "2018-07-27T12:11:53.0317375Z"
+ \}
+ \},
+ \{
+ "Name": "United Package",
+ "Phone": "(503) 555-3199",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:351-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/2-A",
+ "@last-modified": "2018-07-27T12:11:53.0317596Z"
+ \}
+ \},
+ \{
+ "Name": "Federal Shipping",
+ "Phone": "(503) 555-9931",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:353-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/3-A",
+ "@last-modified": "2018-07-27T12:11:53.0317858Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of a cURL request that uses all parameters:
+
+
+
+{`curl -X GET "/databases//docs?
+ startsWith=
+ &matches=||...
+ &exclude=||...
+ &startAfter=
+ &start=
+ &pageSize=
+ &metadataOnly="
+--header "If-None-Match: "
+`}
+
+
+Linebreaks are added for clarity.
+
+
+#### Query String Parameters
+
+| Parameter | Description | Required |
+| - | - | - |
+| **startsWith** | Retrieve all documents whose IDs begin with this string. If the value of this parameter is left empty, all documents in the database are retrieved. | Yes |
+| **matches** | Retrieve only documents whose IDs are exactly the `startsWith` value followed by one of these values. Accepts multiple values separated by a pipe character: ' \| '. Use `?` to represent any single character, and `*` to represent any string. | No |
+| **exclude** | _Exclude_ documents whose IDs are exactly the `startsWith` value followed by one of these values. Accepts multiple values separated by a pipe character: ' \| '. Use `?` to represent any single character, and `*` to represent any string. | No |
+| **startAfter** | Retrieve only the results whose document IDs come after this ID, in lexical order. | No |
+| **start** | Number of results to skip. | No |
+| **pageSize** | Maximum number of results to retrieve. | No |
+| **metadataOnly** | Set this parameter to `true` to retrieve only the document metadata from each result. | No |
+
+#### Headers
+
+| Header | Description | Required |
+| - | - | - |
+| **If-None-Match** | This header takes a hash representing the previous results of an **identical** request. The hash is found in the response header `ETag`. If the results were not modified since the previous request, the server responds with http status code `304` and the requested documents are retrieved from a local cache rather than over the network. | No |
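+
+For illustration, repeating the basic example above with the `ETag` hash from its response would look like this sketch (the hash is only valid for that earlier state of the data):
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?startsWith=ship"
+--header "If-None-Match: A:2137-pIhs+72n6USJoZ5XIvTHvQ"
+`}
+
+
+
+If the results have not changed since that response, the server replies with status `304` and an empty body.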
+
+
+
+## Response Format
+
+#### Http Status Codes
+
+| Code | Description |
+| ----------- | - |
+| `200` | Results were successfully retrieved. If no documents with the specified prefix could be found, the results array is empty. |
+| `304` | In response to an `If-None-Match` check: none of the requested documents were modified since they were last loaded, so they were not retrieved from the server. |
+
+#### Headers
+
+| Header | Description |
+| - | - |
+| **Content-Type** | MIME media type and character encoding. This should always be: `application/json; charset=utf-8` |
+| **ETag** | Hash representing the state of these results. If another, **identical** request is made, this hash can be sent in the `If-None-Match` header to check whether the retrieved documents have been modified since the last response. If none were modified, the server responds with status code `304` and the results can be served from the local cache. |
+| **Raven-Server-Version** | Version of RavenDB that the responding server is running |
+
+#### Body
+
+Retrieved documents are sorted in ascending [lexical order](https://en.wikipedia.org/wiki/Lexicographical_order) of their
+document IDs. A retrieved document is identical in contents and format to the document stored in the server - unless the
+`metadataOnly` parameter is set to `true`.
+
+This is the general JSON format of the response body:
+
+
+
+{`\{
+ "Results": [
+ \{
+ "":"",
+ ...
+ "@metadata":\{
+ ...
+ \}
+ \},
+ \{ \},
+ ...
+ ]
+\}
+`}
+
+
+Linebreaks are added for clarity.
+
+
+
+
+
+## More Examples
+
+[About Northwind](../../../start/about-examples.mdx), the database used in our examples.
+
+In this section:
+
+* [Get Using `matches`](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#get-using)
+* [Get Using `matches` and `exclude`](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#get-usingand)
+* [Get Using `startAfter`](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#get-using-1)
+* [Page Results](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#page-results)
+* [Get Document Metadata Only](../../../client-api/rest-api/document-commands/get-documents-by-prefix.mdx#get-document-metadata-only)
+
+### Get Using `matches`
+
+cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ startsWith=shipp
+ &matches=ers/3-A|ers/1-A"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 12 Sep 2019 10:57:58 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:5972-k50KTOC5G0mfVXKjomTNFQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Speedy Express",
+ "Phone": "(503) 555-9831",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:349-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/1-A",
+ "@last-modified": "2018-07-27T12:11:53.0317375Z"
+ \}
+ \},
+ \{
+ "Name": "Federal Shipping",
+ "Phone": "(503) 555-9931",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:353-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/3-A",
+ "@last-modified": "2018-07-27T12:11:53.0317858Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+### Get Using `matches` and `exclude`
+
+cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ startsWith=shipp
+ &matches=ers/3-A|ers/1-A
+ &exclude=ers/3-A"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 12 Sep 2019 12:24:50 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:5972-k50KTOC5G0mfVXKjomTNFQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Speedy Express",
+ "Phone": "(503) 555-9831",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:349-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/1-A",
+ "@last-modified": "2018-07-27T12:11:53.0317375Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+### Get Using `startAfter`
+
+cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ startsWith=shipp
+ &startAfter=shippers/1-A"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 12 Sep 2019 12:37:39 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:5972-k50KTOC5G0mfVXKjomTNFQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "United Package",
+ "Phone": "(503) 555-3199",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:351-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/2-A",
+ "@last-modified": "2018-07-27T12:11:53.0317596Z"
+ \}
+ \},
+ \{
+ "Name": "Federal Shipping",
+ "Phone": "(503) 555-9931",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:353-k50KTOC5G0mfVXKjomTNFQ",
+ "@id": "shippers/3-A",
+ "@last-modified": "2018-07-27T12:11:53.0317858Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+### Page Results
+
+cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ startsWith=product
+ &start=50
+ &pageSize=2"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 12 Sep 2019 13:17:44 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:5972-k50KTOC5G0mfVXKjomTNFQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "Name": "Pâté chinois",
+ "Supplier": "suppliers/25-A",
+ "Category": "categories/6-A",
+ "QuantityPerUnit": "24 boxes x 2 pies",
+ "PricePerUnit": 24.0000,
+ "UnitsInStock": 25,
+ "UnitsOnOrder": 115,
+ "Discontinued": false,
+ "ReorderLevel": 20,
+ "@metadata": \{
+ "@collection": "Products",
+ "@change-vector": "A:8170-k50KTOC5G0mfVXKjomTNFQ, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "products/55-A",
+ "@last-modified": "2018-07-27T12:11:53.0303784Z"
+ \}
+ \},
+ \{
+ "Name": "Gnocchi di nonna Alice",
+ "Supplier": "suppliers/26-A",
+ "Category": "categories/5-A",
+ "QuantityPerUnit": "24 - 250 g pkgs.",
+ "PricePerUnit": 38.0000,
+ "UnitsInStock": 26,
+ "UnitsOnOrder": 21,
+ "Discontinued": false,
+ "ReorderLevel": 30,
+ "@metadata": \{
+ "@collection": "Products",
+ "@change-vector": "A:8172-k50KTOC5G0mfVXKjomTNFQ, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "products/56-A",
+ "@last-modified": "2018-07-27T12:11:53.0304385Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+Note that the document ID numbers are 55 and 56 rather than the expected 51 and 52 because results are sorted in lexical order.
+
+### Get Document Metadata Only
+
+cURL request:
+
+
+
+{`curl -X GET "http://live-test.ravendb.net/databases/Example/docs?
+ startsWith=regio
+ &metadataOnly=true"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 12 Sep 2019 13:44:16 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: "A:5972-k50KTOC5G0mfVXKjomTNFQ"
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.4.42
+
+\{
+ "Results": [
+ \{
+ "@metadata": \{
+ "@collection": "Regions",
+ "@change-vector": "A:9948-k50KTOC5G0mfVXKjomTNFQ, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "regions/1-A",
+ "@last-modified": "2018-07-27T12:11:53.2016685Z"
+ \}
+ \},
+ \{
+ "@metadata": \{
+ "@collection": "Regions",
+ "@change-vector": "A:9954-k50KTOC5G0mfVXKjomTNFQ, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "regions/2-A",
+ "@last-modified": "2018-07-27T12:11:53.2021826Z"
+ \}
+ \},
+ \{
+ "@metadata": \{
+ "@collection": "Regions",
+ "@change-vector": "A:9950-k50KTOC5G0mfVXKjomTNFQ, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "regions/3-A",
+ "@last-modified": "2018-07-27T12:11:53.2018086Z"
+ \}
+ \},
+ \{
+ "@metadata": \{
+ "@collection": "Regions",
+ "@change-vector": "A:9952-k50KTOC5G0mfVXKjomTNFQ, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "regions/4-A",
+ "@last-modified": "2018-07-27T12:11:53.2019223Z"
+ \}
+ \}
+ ]
+\}
+`}
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/document-commands/put-documents.mdx b/versioned_docs/version-7.1/client-api/rest-api/document-commands/put-documents.mdx
new file mode 100644
index 0000000000..6c8bd2163f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/document-commands/put-documents.mdx
@@ -0,0 +1,206 @@
+---
+title: "Put a Document"
+hide_table_of_contents: true
+sidebar_label: Put a Document
+sidebar_position: 3
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Put a Document
+
+
+* Use this endpoint with the **`PUT`** method to upload a new document to the database, or update an existing one:
+`/databases//docs`
+
+* In this page:
+ * [Examples](../../../client-api/rest-api/document-commands/put-documents.mdx#examples)
+ * [Request Format](../../../client-api/rest-api/document-commands/put-documents.mdx#request-format)
+ * [Request Body](../../../client-api/rest-api/document-commands/put-documents.mdx#request-body)
+ * [Response Format](../../../client-api/rest-api/document-commands/put-documents.mdx#response-format)
+
+
+## Examples
+
+These are cURL requests to a database named "Example" on our [playground server](http://live-test.ravendb.net) to store and
+then modify a document.
+
+#### 1) Store a new document "person/1-A" in the collection "People"
+
+
+
+{`curl -X PUT "http://live-test.ravendb.net/databases/Example/docs?id=person/1-A"
+-d "\{
+ \\"FirstName\\":\\"Jane\\",
+ \\"LastName\\":\\"Doe\\",
+ \\"Age\\":42,
+ \\"@metadata\\":\{
+ \\"@collection\\":\\"People\\"
+ \}
+\}"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 201
+status: 201
+Server: nginx
+Date: Tue, 27 Aug 2019 10:58:28 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.3.42
+
+\{
+ "Id":"person/1-A",
+ "ChangeVector":"A:1"
+\}
+`}
+
+
+
+#### 2) Update that same document
+
+
+
+{`curl -X PUT "http://live-test.ravendb.net/databases/Example/docs?id=person/1-A"
+--header "If-Match: A:1-L8hp6eYcA02dkVIEifGfKg"
+-d "\{
+ \\"FirstName\\":\\"John\\",
+ \\"LastName\\":\\"Smith\\",
+ \\"Age\\":24,
+ \\"@metadata\\":\{
+ \\"@collection\\": \\"People\\"
+ \}
+\}"
+`}
+
+
+
+The response is the same as the previous response except for the updated change vector:
+
+
+
+{`HTTP/1.1 201
+status: 201
+Server: nginx
+Date: Tue, 27 Aug 2019 10:59:54 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.3.42
+
+\{
+ "Id":"person/1-A",
+ "ChangeVector":"A:3"
+\}
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of the cURL request:
+
+
+
+{`curl -X PUT "/databases//docs?id="
+--header "If-Match: "
+-d ""
+`}
+
+
+
+#### Query String Parameters
+
+| Parameter | Description | Required |
+| - | - | - |
+| **id** | Unique ID under which the new document will be stored, or the ID of an existing document to be updated | Yes |
+
+#### Headers
+
+| Header | Description | Required |
+| - | - | - |
+| **If-Match** | When updating an existing document, this header passes the document's expected [change vector](../../../server/clustering/replication/change-vector.mdx). If this change vector doesn't match the document's server-side change vector, a concurrency exception is thrown. | No |
+
+#### Request Body
+
+The body contains a JSON document. This will replace the existing document with the specified ID if one exists. Otherwise,
+it will become a new document with the specified ID.
+
+
+
+{`\{
+ \\"\\": \\"\\",
+ ...
+ \\"@metadata\\": \{
+ \\"@collection\\": \\"\\",
+ ...
+ \}
+\}
+`}
+
+
+Depending on the shell you're using to run cURL, you will probably need to escape all double quotes within the request body
+using a backslash: `"` -> `\"`.
+
+When updating an existing document, you'll need to include its [collection](../../../client-api/faq/what-is-a-collection.mdx)
+name in the metadata or an exception will be thrown. The exception to this rule is documents in the collection `@empty` -
+i.e. documents that are not in any collection. A document's collection cannot be modified.
+
+Another way to make this request is to save your document as a file (such as a `.txt`), and pass the path to that file in
+the request body:
+
+
+
+{`curl -X PUT "/databases//docs?id="
+-d "<@path/to/yourDocument.txt>"
+`}
+
+
+
+
+
+## Response Format
+
+The response body is JSON and contains the document ID and current [change vector](../../../server/clustering/replication/change-vector.mdx):
+
+
+
+{`\{
+ "Id": "",
+ "ChangeVector": ""
+\}
+`}
+
+
+
+| Header | Description |
+| - | - |
+| **Content-Type** | MIME media type and character encoding. This should always be: `application/json; charset=utf-8`. |
+| **Raven-Server-Version** | Version of RavenDB the responding server is running |
+
+| HTTP Status Code | Description |
+| - | - |
+| `201` | The document was successfully stored / updated |
+| `409` | The change vector submitted did not match the server-side change vector. A concurrency exception is thrown. |
+| `500` | Server error, e.g. when the collection specified in the submitted document's metadata does not match the existing document's collection. |
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/queries/_category_.json b/versioned_docs/version-7.1/client-api/rest-api/queries/_category_.json
new file mode 100644
index 0000000000..b79f52fd77
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/queries/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 2,
+ "label": Queries,
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/rest-api/queries/delete-by-query.mdx b/versioned_docs/version-7.1/client-api/rest-api/queries/delete-by-query.mdx
new file mode 100644
index 0000000000..823ee5f7ed
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/queries/delete-by-query.mdx
@@ -0,0 +1,140 @@
+---
+title: "Delete By Query"
+hide_table_of_contents: true
+sidebar_label: Delete by Query
+sidebar_position: 1
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Delete By Query
+
+
+* Use this endpoint with the **`DELETE`** method to delete all documents that satisfy a query:
+`/databases//queries`
+
+* In this page:
+ * [Example](../../../client-api/rest-api/queries/delete-by-query.mdx#example)
+ * [Request Format](../../../client-api/rest-api/queries/delete-by-query.mdx#request-format)
+ * [Response Format](../../../client-api/rest-api/queries/delete-by-query.mdx#response-format)
+
+
+## Example
+
+This cURL request sends a query to a database named "Example" on our [playground server](http://live-test.ravendb.net). The
+results of this query - in this case, a single document with the ID "employees/1-A" - are all deleted.
+
+
+
+{`curl -X DELETE "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{ \\"Query\\": \\"from Employees where FirstName = 'Nancy'\\" \}"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Sun, 24 Nov 2019 12:21:11 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.5.42
+Request-Time: 5
+
+\{
+ "OperationId": 42,
+ "OperationNodeTag": "A"
+\}
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of a cURL request that uses all query string parameters:
+
+
+
+{`curl -X DELETE "/databases//queries?
+ allowStale=
+ &staleTimeout=
+ &maxOpsPerSec="
+-d "\{ \}"
+`}
+
+
+
+#### Query String Parameters
+
+| Option | Description |
+| - | - |
+| **allowStale** | If the query is on an index (rather than a collection), this determines whether to delete results from a [stale index](../../../indexes/stale-indexes.mdx). If set to `false` and the specified index is stale, an exception is thrown. Default: `false`. |
+| **staleTimeout** | If `allowStale` is set to `false`, this parameter sets the amount of time to wait for the index not to be stale. If the time runs out, an exception is thrown. The value is of type [TimeSpan](https://docs.microsoft.com/en-us/dotnet/api/system.timespan). Default: `null` - if the index is stale the exception is thrown immediately. |
+| **maxOpsPerSec** | The maximum number of deletions per second the server can perform in the background. Default: no limit. |
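+
+For example, this sketch deletes the results of a query on a hypothetical auto index, tolerating stale results and capping the background deletion rate:
+
+
+
+{`curl -X DELETE "http://live-test.ravendb.net/databases/Example/queries?
+ allowStale=true
+ &maxOpsPerSec=100"
+-d "\{ \\"Query\\": \\"from index 'Auto/Employees/ByFirstName' where FirstName = 'Nancy'\\" \}"
+`}
+
+
+
+Linebreaks are added for clarity.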
+
+#### Body
+
+This is the general format of the request body:
+
+
+
+{`-d "\{
+ \\"Query\\": \\">\\",
+ \\"QueryParameters\\": \{
+ \\"\\":\\"\\",
+ ...
+ \}
+\}"
+`}
+
+
+Depending on the shell you're using to run cURL, you will probably need to escape all
+double quotes within the request body using a backslash: `"` -> `\"`.
+
+| Parameter | Description |
+| - | - |
+| **Query** | A query in [RQL](../../../client-api/session/querying/what-is-rql.mdx). You can insert parameters from the `QueryParameters` object with `$` |
+| **QueryParameters** | A list of values that can be used in the query, such as strings, ints, or document IDs. Inputs from your users should always be passed as query parameters to avoid injection attacks, and in general it's best practice to pass all your right-hand operands as parameters. |
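+
+As a sketch, the example at the top of this page could pass the name as a query parameter instead of embedding it in the RQL (the parameter name `name` is illustrative):
+
+
+
+{`curl -X DELETE "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{ \\"Query\\": \\"from Employees where FirstName = $name\\", \\"QueryParameters\\": \{ \\"name\\": \\"Nancy\\" \} \}"
+`}
+
+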
+
+
+## Response Format
+
+#### Http Status Codes
+
+| Code | Description |
+| - | - |
+| `200` | The request was valid. This includes the case where the query found 0 results, or the specified index does not exist, etc. |
+| `500` | Bad request or server-side exception |
+
+#### Body
+
+
+
+{`\{
+ "OperationId": ,
+ "OperationNodeTag": ""
+\}
+`}
+
+
+
+| Field | Description |
+| - | - |
+| **OperationId** | Increments each time the server receives a new Operation to execute, such as `DeleteByQuery` or `PatchByQuery` |
+| **OperationNodeTag** | The tag of the Cluster Node that first received the Delete by Query Operation. Values are `A` to `Z`. See [Cluster Topology](../../../server/clustering/rachis/cluster-topology.mdx). |
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/queries/patch-by-query.mdx b/versioned_docs/version-7.1/client-api/rest-api/queries/patch-by-query.mdx
new file mode 100644
index 0000000000..247a39218f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/queries/patch-by-query.mdx
@@ -0,0 +1,145 @@
+---
+title: "Patch By Query"
+hide_table_of_contents: true
+sidebar_label: Patch by Query
+sidebar_position: 2
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Patch By Query
+
+
+* Use this endpoint with the **`PATCH`** method to update all documents that satisfy a query:
+`/databases//queries`
+
+* [Patching](../../../client-api/operations/patching/set-based.mdx) occurs on the server side.
+
+* In this page:
+ * [Example](../../../client-api/rest-api/queries/patch-by-query.mdx#example)
+ * [Request Format](../../../client-api/rest-api/queries/patch-by-query.mdx#request-format)
+ * [Response Format](../../../client-api/rest-api/queries/patch-by-query.mdx#response-format)
+
+
+## Example
+
+This cURL request sends a query with an `update` clause to a database named "Example" on our
+[playground server](http://live-test.ravendb.net). The results of this query will each be modified on the server side.
+
+
+
+{`curl -X PATCH "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{ \\"Query\\": \{ \\"Query\\": \\"from Employees as E update\{ E.FirstName = 'Bob' \}\\" \} \}"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Sun, 24 Nov 2019 12:24:51 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.5.42
+Request-Time: 5
+
+\{
+ "OperationId": 42,
+ "OperationNodeTag": "A"
+\}
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of a cURL request that uses all query string parameters:
+
+
+
+{`curl -X PATCH "/databases//queries?
+ allowStale=
+ &staleTimeout=
+ &maxOpsPerSec="
+-d "\{ \}"
+`}
+
+
+
+#### Query String Parameters
+
+| Option | Description |
+| - | - |
+| **allowStale** | If the query is on an index (rather than a collection), this determines whether to patch results from a [stale index](../../../indexes/stale-indexes.mdx). If set to `false` and the specified index is stale, an exception is thrown. Default: `false`. |
+| **staleTimeout** | If `allowStale` is set to `false`, this parameter sets the amount of time to wait for the index not to be stale. If the time runs out, an exception is thrown. The value is of type [TimeSpan](https://docs.microsoft.com/en-us/dotnet/api/system.timespan). Default: `null` - if the index is stale the exception is thrown immediately. |
+| **maxOpsPerSec** | The maximum number of patches per second the server can perform in the background. Default: no limit. |
+
+#### Body
+
+This is the general format of the request body:
+
+
+
+{`-d "\{
+ \\"Query\\": \{
+ \\"Query\\": \\">\\",
+ \\"QueryParameters\\": \{
+ \\"\\":\\"\\",
+ ...
+ \}
+ \}
+\}"
+`}
+
+
+Depending on the shell you're using to run cURL, you will probably need to escape all
+double quotes within the request body using a backslash: `"` -> `\"`.
+
+| Parameter | Description |
+| - | - |
+| **Query** | A query in [RQL](../../../client-api/session/querying/what-is-rql.mdx). You can insert parameters from the `QueryParameters` object with `$` |
+| **QueryParameters** | A list of values that can be used in the query, such as strings, ints, or document IDs. Inputs from your users should always be passed as query parameters to avoid injection attacks, and in general it's best practice to pass all your right-hand operands as parameters. |
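+
+As a sketch, the example at the top of this page could take both the filter value and the new value as query parameters (the names `old` and `new` are illustrative):
+
+
+
+{`curl -X PATCH "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{ \\"Query\\": \{ \\"Query\\": \\"from Employees as E where E.FirstName = $old update\{ E.FirstName = $new \}\\", \\"QueryParameters\\": \{ \\"old\\": \\"Nancy\\", \\"new\\": \\"Bob\\" \} \} \}"
+`}
+
+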
+
+
+## Response Format
+
+#### Http Status Codes
+
+| Code | Description |
+| - | - |
+| `200` | The request was valid. This includes the case where the query found 0 results, or the specified index does not exist, etc. |
+| `400` | Bad request |
+| `500` | Server-side exception |
+
+#### Body
+
+
+
+{`\{
+ "OperationId": ,
+ "OperationNodeTag": ""
+\}
+`}
+
+
+
+| Field | Description |
+| - | - |
+| **OperationId** | Increments each time the server receives a new Operation to execute, such as `DeleteByQuery` or `PatchByQuery` |
+| **OperationNodeTag** | The tag of the Cluster Node that first received the Patch by Query Operation. Values are `A` to `Z`. See [Cluster Topology](../../../server/clustering/rachis/cluster-topology.mdx). |
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/queries/query-the-database.mdx b/versioned_docs/version-7.1/client-api/rest-api/queries/query-the-database.mdx
new file mode 100644
index 0000000000..431ce41a26
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/queries/query-the-database.mdx
@@ -0,0 +1,616 @@
+---
+title: "Query the Database"
+hide_table_of_contents: true
+sidebar_label: Query the Database
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Query the Database
+
+
+* Use this endpoint with the **`POST`** method to query the database:
+`/databases//queries`
+
+* Queries are written in [RQL](../../../client-api/session/querying/what-is-rql.mdx), our user-friendly SQL-like query language.
+
+* In this page:
+ * [Basic Example](../../../client-api/rest-api/queries/query-the-database.mdx#basic-example)
+ * [Request Format](../../../client-api/rest-api/queries/query-the-database.mdx#request-format)
+ * [Response Format](../../../client-api/rest-api/queries/query-the-database.mdx#response-format)
+ * [More Examples](../../../client-api/rest-api/queries/query-the-database.mdx#more-examples)
+
+
+## Basic Example
+
+This cURL request queries the [collection](../../../client-api/faq/what-is-a-collection.mdx) `Shippers` in a database named
+"Example" on our [playground server](http://live-test.ravendb.net).
+The response contains all documents from this collection.
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{ \\"Query\\": \\"from Shippers\\" \}"
+`}
+
+
+Linebreaks are added for clarity.
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Date: Wed, 06 Nov 2019 15:54:15 GMT
+Content-Type: application/json; charset=utf-8
+Server: Kestrel
+ETag: -786759538542975908
+Vary: Accept-Encoding
+Raven-Server-Version: 4.1.9.41023
+Request-Time: 0
+Content-Length: 1103
+
+\{
+ "TotalResults": 3,
+ "SkippedResults": 0,
+ "DurationInMs": 0,
+ "IncludedPaths": null,
+ "IndexName": "collection/Shippers",
+ "Results": [
+ \{
+ "Name": "Speedy Express",
+ "Phone": "(503) 555-9831",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:8529-+pXj/MXEzkeiuFCvLdipcw, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "shippers/1-A",
+ "@last-modified": "2018-07-27T12:11:53.0317375Z"
+ \}
+ \},
+ \{
+ "Name": "United Package",
+ "Phone": "(503) 555-3199",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:8531-+pXj/MXEzkeiuFCvLdipcw, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "shippers/2-A",
+ "@last-modified": "2018-07-27T12:11:53.0317596Z"
+ \}
+ \},
+ \{
+ "Name": "Federal Shipping",
+ "Phone": "(503) 555-9931",
+ "@metadata": \{
+ "@collection": "Shippers",
+ "@change-vector": "A:8533-+pXj/MXEzkeiuFCvLdipcw, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@id": "shippers/3-A",
+ "@last-modified": "2018-07-27T12:11:53.0317858Z"
+ \}
+ \}
+ ],
+ "Includes": \{\},
+ "IndexTimestamp": "0001-01-01T00:00:00.0000000",
+ "LastQueryTime": "0001-01-01T00:00:00.0000000",
+ "IsStale": false,
+ "ResultEtag": -786759538542975908,
+ "NodeTag": "A"
+\}
+`}
+
+
+
+
+
+## Request Format
+
+This is the general format of a cURL request that uses all query string parameters:
+
+
+
+{`curl -X POST "/databases//queries?
+ metadataOnly=
+ &includeServerSideQuery=
+ &debug="
+--header "If-None-Match: "
+-d "\{ \}"
+`}
+
+
+Linebreaks are added for clarity.
+
+
+#### Query String Parameters
+
+| Parameter | Description | Required |
+| - | - | - |
+| **metadataOnly** | Set this parameter to `true` to retrieve only the document metadata from each result | No |
+| **includeServerSideQuery** | Adds the RQL query that is run on the server side, which may look slightly different than the query sent | No |
+| **debug** | Takes one of several values - listed in the table below - that modify the results or add information | No |
+
+#### Values of `debug` parameter
+
+| Value | Description |
+|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **entries** | Returns the index entries instead of the complete documents, meaning only those fields that are indexed by the queried index |
+| **explain** | Used for queries on [Auto Indexes](../../../indexes/creating-and-deploying.mdx#auto-indexes). Returns _just_ the name of an existing index that can be used to satisfy this query. If no appropriate index could be found, returns the next best index with an explanation of why it is not appropriate for this query - e.g. it does not index the necessary fields. If no index was found, this query will _not_ trigger the creation of an auto index as it normally would. |
+| **serverSideQuery** | Returns _just_ the RQL query that is run on the server side, which may look slightly different than the query sent |
+| **graph** | Returns [Graph Query](../../../indexes/querying/graph/graph-queries-overview.mdx) results analyzed as nodes and edges |
+| **detailedGraphResult** | Returns [Graph Query](../../../indexes/querying/graph/graph-queries-overview.mdx) results arranged by their corresponding parts of the query |
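+
+For instance, this sketch asks only for the server-side version of the basic query shown above:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/queries?debug=serverSideQuery"
+-d "\{ \\"Query\\": \\"from Shippers\\" \}"
+`}
+
+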
+
+#### Headers
+
+| Header | Description |
+|-------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **If-None-Match** | This optional header tells the server to check whether the requested data has changed since the last request. To use it, insert the value of the `ResultEtag` field from the response to your previous query. This value is a hash of type `long` that represents the state of the index or collection that satisfied the query. If that index or collection has not been updated, the server will respond with http status code `304` and no results will be retrieved. Note that this is regardless of the content of the query itself. |
+
+#### Body
+
+This is the general format of the request body:
+
+
+
+{`-d "\{
+ \\"Query\\": \\">\\",
+ \\"QueryParameters\\": \{
+ \\"\\":\\"\\",
+ ...
+ \}
+\}"
+`}
+
+
+
+Depending on the shell you're using to run cURL,
+you will probably need to escape all double quotes within the request body using a backslash: `"` -> `\"`.
+
+| Parameter | Description |
+|---------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Query** | A query in [RQL](../../../client-api/session/querying/what-is-rql.mdx). You can insert parameters from the `QueryParameters` object with `$` |
+| **QueryParameters** | A list of values that can be used in the query, such as strings, ints, or document IDs. Inputs from your users should always be passed as query parameters to avoid injection attacks, and in general it's best practice to pass all your right-hand operands as parameters. |
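+
+As a sketch, the basic example above could be rewritten with a query parameter (the parameter name `name` is illustrative):
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{ \\"Query\\": \\"from Shippers where Name = $name\\", \\"QueryParameters\\": \{ \\"name\\": \\"Speedy Express\\" \} \}"
+`}
+
+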
+
+## Response Format
+
+#### Http Status Codes
+
+| Code | Description |
+| - | - |
+| `200` | Results are successfully retrieved, including the case where there are 0 results |
+| `304` | In response to a query with the `If-None-Match` header: the same index was used to satisfy the query, and none of the requested documents were modified since they were last loaded, so they were not retrieved from the server. (They are retrieved from the local cache instead). |
+| `404` | The specified index could not be found. In the case where a specified collection could not be found, see status code `200`. |
+| `500` | Invalid query or server-side exception |
+
+
+#### Body
+
+
+
+{`\{
+ "TotalResults": ,
+ "SkippedResults": ,
+ "CappedMaxResults": ,
+ "DurationInMs": ,
+ "IncludedPaths": [
+ "",
+ ...
+ ],
+ "IndexName": "",
+ "Results": [
+ \{
+
+ \},
+ ...
+ ],
+ "Includes":
+ "": \{
+
+ \},
+ "": \{ \},
+ ...
+ \},
+ "IndexTimestamp": "",
+ "LastQueryTime": "",
+ "IsStale": ,
+ "ResultEtag": ,
+ "NodeTag": "",
+ "Timings": \{ \},
+ "ServerSideQuery"
+\}
+`}
+
+
+
+| Field | Description |
+| - | - |
+| **TotalResults** | The total number of results of the query |
+| **CappedMaxResults** | The number of results retrieved after the [maximum page size](../../../indexes/querying/paging.mdx) is applied. If paging was not used, this field does not appear. |
+| **SkippedResults** | The number of results that were skipped, e.g. because there were [duplicates](../../../indexes/querying/distinct.mdx) |
+| **DurationInMs** | Number of milliseconds it took to satisfy the query on the server side |
+| **IncludedPaths** | Array of the paths within the queried documents to the [related document](../../../client-api/how-to/handle-document-relationships.mdx#includes) IDs. Default: `null` |
+| **IndexName** | Name of the index used to satisfy the query |
+| **Results** | List of documents returned by the query, sorted in ascending order of their [change vectors](../../../server/clustering/replication/change-vector.mdx) |
+| **Includes** | List of included documents returned by the query, sorted in ascending alphabetical order |
+| **IndexTimestamp** | The last time the index was updated. [DateTime format](https://docs.microsoft.com/en-us/dotnet/api/system.datetime) |
+| **LastQueryTime** | The last time the index was queried. This includes the case where the most recent query occurred after this query. |
+| **IsStale** | Whether the results are [stale](../../../indexes/stale-indexes.mdx) |
+| **ResultEtag** | A hash of type `long` representing the results. When making another request identical to this one, this value can be sent in the `If-None-Match` header to check whether the results have been modified since this response. If not, the results will be retrieved from a local cache instead of from the server. |
+| **NodeTag** | The tag of the Cluster Node that responded to the query. Values are `A` to `Z`. See [Cluster Topology](../../../server/clustering/rachis/cluster-topology.mdx). |
+| **Timings** | If [requested](../../../client-api/session/querying/debugging/query-timings.mdx), the duration of the query operation and each of its sub-stages. See the structure of the [`Timings` object](../../../client-api/rest-api/queries/query-the-database.mdx#the--object) and the [timings example](../../../client-api/rest-api/queries/query-the-database.mdx#get-timing-details) below. |
+
+#### The `Timings` Object
+
+`Timings` tells you the duration of the whole query operation, including a breakdown of the different stages and sub-stages of the
+operation. Examples of these stages might be the query itself or the amount of time the server waited for an index not to be stale.
+These are the durations on the server side, not including the transfer over the network.
+
+The `Timings` object itself has a hierarchical structure, with each stage containing a list of sub-stages, which contain their
+own lists, and so on. Each stage contains a `DurationInMs` field with the total number of milliseconds the stage took, and a field
+called `Timings` which contains the list of sub-stages. If a stage has no sub-stages, the value of its `Timings` field is `null`.
+
+At every level of this structure, stages are listed in _alphabetical order_ of the stage's names. The durations of sub-stages only
+roughly add up to the duration of the parent stage because `DurationInMs` values are rounded to the nearest whole number.
+
+
+
+{`"Timings": \{
+ "DurationInMs": ,
+ "Timings": \{
+ "": \{
+ "DurationInMs": ,
+ "Timings": \{
+ "": \{
+ "DurationInMs": ,
+ "Timings": \{
+ "": \{
+ \},
+ ...
+ \},
+ "": \{ \},
+ ...
+ \}
+\}
+`}
+
+
+
+
+
+## More Examples
+
+[About Northwind](../../../start/about-examples.mdx), the database used in our examples.
+
+In this section:
+
+* [Include Related Documents](../../../client-api/rest-api/queries/query-the-database.mdx#include-related-documents)
+* [Page Results](../../../client-api/rest-api/queries/query-the-database.mdx#page-results)
+* [Get Timing Details](../../../client-api/rest-api/queries/query-the-database.mdx#get-timing-details)
+
+### Include Related Documents
+
+This query tells the server to include a [related document](../../../client-api/how-to/handle-document-relationships.mdx#includes).
+
+Paths within documents can be passed as a `string` (`'Address.City'`), or directly (`Address.City`) as in this query. When writing
+paths as a `string` keep in mind [these conventions](../../../client-api/how-to/handle-document-relationships.mdx#path-conventions).
+
+Request:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{ \\"Query\\": \\"from Products where Name = 'Chocolade' include Supplier, Category\\" \}"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 21 Nov 2019 14:55:59 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: -829128196141269816
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.5.42
+Request-Time: 166
+
+\{
+ "TotalResults": 1,
+ "SkippedResults": 0,
+ "DurationInMs": 165,
+ "IncludedPaths": [
+ "Supplier",
+ "Category"
+ ],
+ "IndexName": "Auto/Products/ByName",
+ "Results": [
+ \{
+ "Name": "Chocolade",
+ "Supplier": "suppliers/22-A",
+ "Category": "categories/3-A",
+ "QuantityPerUnit": "10 pkgs.",
+ "PricePerUnit": 12.7500,
+ "UnitsInStock": 22,
+ "UnitsOnOrder": 15,
+ "Discontinued": false,
+ "ReorderLevel": 25,
+ "@metadata": \{
+ "@collection": "Products",
+ "@change-vector": "A:285-axxGtO/AJUGOLMLrpcu8hA",
+ "@id": "products/48-A",
+ "@index-score": 4.65065813064575,
+ "@last-modified": "2018-07-27T12:11:53.0300420Z"
+ \}
+ \}
+ ],
+ "Includes": \{
+ "suppliers/22-A": \{
+ "Contact": \{
+ "Name": "Dirk Luchte",
+ "Title": "Accounting Manager"
+ \},
+ "Name": "Zaanse Snoepfabriek",
+ "Address": \{
+ "Line1": "Verkoop Rijnweg 22",
+ "Line2": null,
+ "City": "Zaandam",
+ "Region": null,
+ "PostalCode": "9999 ZZ",
+ "Country": "Netherlands",
+ "Location": null
+ \},
+ "Phone": "(12345) 1212",
+ "Fax": "(12345) 1210",
+ "HomePage": null,
+ "@metadata": \{
+ "@collection": "Suppliers",
+ "@change-vector": "A:399-axxGtO/AJUGOLMLrpcu8hA",
+ "@id": "suppliers/22-A",
+ "@last-modified": "2018-07-27T12:11:53.0335729Z"
+ \}
+ \},
+ "categories/3-A": \{
+ "Name": "Confections",
+ "Description": "Desserts, candies, and sweet breads",
+ "@metadata": \{
+ "@attachments": [
+ \{
+ "Name": "image.jpg",
+ "Hash": "1QxSMa3tBr+y8wQYNre7E9UJFFVTNWGjVoC+IC+gSSs=",
+ "ContentType": "image/jpeg",
+ "Size": 47955
+ \}
+ ],
+ "@collection": "Categories",
+ "@change-vector": "A:2092-axxGtO/AJUGOLMLrpcu8hA",
+ "@flags": "HasAttachments",
+ "@id": "categories/3-A",
+ "@last-modified": "2018-07-27T12:16:44.1738714Z"
+ \}
+ \}
+ \},
+ "IndexTimestamp": "2019-11-21T14:55:59.4797461",
+ "LastQueryTime": "2019-11-21T14:55:59.4847597",
+ "IsStale": false,
+ "ResultEtag": -829128196141269816,
+ "NodeTag": "A"
+\}
+`}
+
+
+
+### Page Results
+
+This query uses the `limit` keyword to skip the first 5 results and retrieve the next 2:
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{ \\"Query\\": \\"from index 'Product/Search' limit 5, 2 \\" \}"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 21 Nov 2019 15:25:45 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: 7666904607700231125
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.5.42
+Request-Time: 0
+
+\{
+ "TotalResults": 77,
+ "CappedMaxResults": 2,
+ "SkippedResults": 0,
+ "DurationInMs": 0,
+ "IncludedPaths": null,
+ "IndexName": "Product/Search",
+ "Results": [
+ \{
+ "Name": "Grandma's Boysenberry Spread",
+ "Supplier": "suppliers/3-A",
+ "Category": "categories/2-A",
+ "QuantityPerUnit": "12 - 8 oz jars",
+ "PricePerUnit": 25.0000,
+ "UnitsInStock": 3,
+ "UnitsOnOrder": 120,
+ "Discontinued": false,
+ "ReorderLevel": 25,
+ "@metadata": \{
+ "@collection": "Products",
+ "@change-vector": "A:201-axxGtO/AJUGOLMLrpcu8hA",
+ "@id": "products/6-A",
+ "@index-score": 1,
+ "@last-modified": "2018-07-27T12:11:53.0274169Z"
+ \}
+ \},
+ \{
+ "Name": "Uncle Bob's Organic Dried Pears",
+ "Supplier": "suppliers/3-A",
+ "Category": "categories/7-A",
+ "QuantityPerUnit": "12 - 1 lb pkgs.",
+ "PricePerUnit": 30.0000,
+ "UnitsInStock": 3,
+ "UnitsOnOrder": 15,
+ "Discontinued": false,
+ "ReorderLevel": 10,
+ "@metadata": \{
+ "@collection": "Products",
+ "@change-vector": "A:203-axxGtO/AJUGOLMLrpcu8hA",
+ "@id": "products/7-A",
+ "@index-score": 1,
+ "@last-modified": "2018-07-27T12:11:53.0275119Z"
+ \}
+ \}
+ ],
+ "Includes": \{\},
+ "IndexTimestamp": "2019-11-21T14:55:01.6473995",
+ "LastQueryTime": "2019-11-21T15:25:45.7308416",
+ "IsStale": false,
+ "ResultEtag": 7666904607700231125,
+ "NodeTag": "A"
+\}
+`}
+
+
+
+### Get Timing Details
+
+In this request we see a query on the `Orders` collection, filtered by the values of the fields `Employee` and `Company`
+(incidentally, both point to related documents), and a projection that selects only the `Freight` and `ShipVia` fields
+to be retrieved from the server. Finally, using the same `include` syntax shown above for related documents, it asks for
+`timings()`.
+
+
+
+{`curl -X POST "http://live-test.ravendb.net/databases/Example/queries"
+-d "\{\\"Query\\": \\"from Orders
+ where Employee = 'employees/1-A'
+ and Company = 'companies/91-A'
+ select Freight, ShipVia
+ include timings()\\"\}"
+`}
+
+
+
+Response:
+
+
+
+{`HTTP/1.1 200 OK
+Server: nginx
+Date: Thu, 21 Nov 2019 16:58:32 GMT
+Content-Type: application/json; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Content-Encoding: gzip
+ETag: -1802145387109965474
+Vary: Accept-Encoding
+Raven-Server-Version: 4.2.5.42
+Request-Time: 214
+
+\{
+ "TotalResults": 2,
+ "SkippedResults": 0,
+ "DurationInMs": 213,
+ "IncludedPaths": null,
+ "IndexName": "Auto/Orders/ByCompanyAndEmployee",
+ "Results": [
+ \{
+ "Freight": 3.94,
+ "ShipVia": "shippers/3-A",
+ "@metadata": \{
+ "@projection": true,
+ "@change-vector": "A:45767-axxGtO/AJUGOLMLrpcu8hA, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@flags": "HasRevisions",
+ "@id": "orders/127-A",
+ "@index-score": 6.3441801071167,
+ "@last-modified": "2018-07-27T12:11:53.0677162Z"
+ \}
+ \},
+ \{
+ "Freight": 23.79,
+ "ShipVia": "shippers/3-A",
+ "@metadata": \{
+ "@projection": true,
+ "@change-vector": "A:46603-axxGtO/AJUGOLMLrpcu8hA, A:1887-0N64iiIdYUKcO+yq1V0cPA, A:6214-xwmnvG1KBkSNXfl7/0yJ1A",
+ "@flags": "HasRevisions",
+ "@id": "orders/545-A",
+ "@index-score": 6.3441801071167,
+ "@last-modified": "2018-07-27T12:11:53.1390160Z"
+ \}
+ \}
+ ],
+ "Includes": \{\},
+ "IndexTimestamp": "2019-11-21T16:58:32.8180797",
+ "LastQueryTime": "2019-11-21T16:58:32.8179978",
+ "IsStale": false,
+ "ResultEtag": -1802145387109965474,
+ "NodeTag": "A",
+ "Timings": \{
+ "DurationInMs": 213,
+ "Timings": \{
+ "Optimizer": \{
+ "DurationInMs": 46,
+ "Timings": null
+ \},
+ "Query": \{
+ "DurationInMs": 0,
+ "Timings": \{
+ "Lucene": \{
+ "DurationInMs": 0,
+ "Timings": null
+ \},
+ "Retriever": \{
+ "DurationInMs": 0,
+ "Timings": \{
+ "Projection": \{
+ "DurationInMs": 0,
+ "Timings": \{
+ "Storage": \{
+ "DurationInMs": 0,
+ "Timings": null
+ \}
+ \}
+ \}
+ \}
+ \}
+ \}
+ \},
+ "Staleness": \{
+ "DurationInMs": 165,
+ "Timings": null
+ \}
+ \}
+ \}
+\}
+`}
+
+
+
+At the end of the response body above we see the `Timings` object which shows all the stages of the operation listed in
+alphabetical order. In this case there was an `Optimizer` stage, during which a new dynamic index was created to satisfy the
+query. The name of this new index is shown at the top of the body: `Auto/Orders/ByCompanyAndEmployee`. Next came a `Staleness`
+stage during which the indexing itself took place. Lastly came the `Query` stage itself. This included a [Lucene search engine](https://lucene.apache.org/)
+substage and a `Retriever` substage. As you can see, since the index has already done all the work, the query itself takes less
+than a millisecond. From now on, similar queries on this index will also take the server a millisecond or less to complete.
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/rest-api/rest-api-intro.mdx b/versioned_docs/version-7.1/client-api/rest-api/rest-api-intro.mdx
new file mode 100644
index 0000000000..23349ccd7c
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/rest-api/rest-api-intro.mdx
@@ -0,0 +1,167 @@
+---
+title: "Introduction to the REST API"
+hide_table_of_contents: true
+sidebar_label: Introduction to the REST API
+sidebar_position: 0
+---
+
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+# Introduction to the REST API
+
+
+* This page covers some basic information that will help you learn to use the REST API:
+ * How to use the CLI tool *cURL*.
+ * A description of the JSON format for the purposes of writing and parsing it.
+ * Some of the HTTP status codes used in the API.
+
+* To learn more about HTTP and REST in general, try these tutorials:
+ * [HTTP guide for developers by Mozilla](https://developer.mozilla.org/en-US/docs/Web/HTTP)
+ * [REST API Tutorial website](https://www.restapitutorial.com/)
+
+* In this page:
+ * [cURL Basics](../../client-api/rest-api/rest-api-intro.mdx#curl-basics)
+ * [Document Format and Structure](../../client-api/rest-api/rest-api-intro.mdx#document-format-and-structure)
+ * [Using cURL With HTTPS](../../client-api/rest-api/rest-api-intro.mdx#using-curl-with-https)
+ * [Common HTTP Status Codes](../../client-api/rest-api/rest-api-intro.mdx#common-http-status-codes)
+
+
+## cURL Basics
+
+A good way to familiarize yourself with the RavenDB REST API is with the command line tool cURL, which allows you to construct and
+send individual HTTP requests. You can download cURL from [curl.haxx.se](https://curl.haxx.se/download.html) (If you're using Linux
+your CLI may already have cURL installed). You can learn how to use it with the [cURL documentation](https://curl.haxx.se/docs/).
+This page just covers the basics you'll need to interact with RavenDB.
+
+All cURL commands begin with the keyword `curl` and contain the URL of your RavenDB server or one of its endpoints. This command retrieves the first document from
+a database named "Demo" located on our public [playground server](http://live-test.ravendb.net), and prints it in your CLI:
+
+
+
+{`curl http://live-test.ravendb.net/databases/demo/docs?pagesize=1
+`}
+
+
+
+The other parameters of the HTTP request are specified using 'options'. These are the main cURL options that interest us:
+
+| Option | Purpose |
+| - | - |
+| -X | Set the [HTTP method](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html) that is sent with the request |
+| -H | Add one or more headers, e.g. to provide extra information about the contents of the request body |
+| -d | This option denotes the beginning of the body of the request. The body itself is wrapped with double quotes `"`. One of the ways to upload a document to the server is to send it in the body. |
+| -T | Set the path to a file you want to upload, such as a document or attachment |
+| --cert | (For https) the path to your certificate file |
+| --key | (For https) the path to your private key file |
+
+This request uploads a document to a database on the playground server from a local file:
+
+
+
+{`curl -X PUT http://live-test.ravendb.net/databases/demo/docs?id=example -T document.txt
+`}
+
+
+[More about how to upload documents](../../client-api/rest-api/document-commands/put-documents.mdx)
+
+
+
+## Document Format and Structure
+
+In RavenDB all documents have a standard [JSON](https://www.json.org/) format. In essence, every JSON object is composed of a series
+of key-value pairs. A document with a complex structure might look something like this:
+
+
+
+{`\{
+ "": ,
+ "": "",
+ "an array": [
+ ,
+ "",
+ ...
+ ],
+ "an object": \{
+ "": ,
+ "": "",
+ ...
+ \},
+ ...
+\}
+`}
+
+
+
+The whole object is wrapped in curly brackets `{}`. The key is always a string, and the value can be a string (denoted by
+double quotes), a number, a boolean, or null. The value can also be an array of values wrapped in square brackets `[]`, or it can itself be another JSON object
+wrapped in another pair of curly brackets. Whitespace is completely optional. In the above example and throughout the documentation,
+JSON is broken into multiple lines for the sake of clarity. When using cURL, the entire command including the request body
+needs to be on one line.
+
+
+#### Sending raw JSON using cURL
+Sending raw JSON in the body presents a problem: the body itself is wrapped with double quotes `"`,
+so the double quotes within the JSON will be interpreted by the parser as the end of the body. The solution is to escape every double quote
+by putting a backslash `\` before it, like this:
+
+
+
+{`-d "\{
+ \\"a string\\": \\"some text\\",
+ \\"a number\\": 42
+\}"
+`}
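+
+Put together on a single line, as cURL requires, a complete request along these lines might look like this sketch (using the same illustrative database and document ID as the upload example above):
+
+
+
+{`curl -X PUT "http://live-test.ravendb.net/databases/demo/docs?id=example" -d "\{ \\"a string\\": \\"some text\\", \\"a number\\": 42 \}"
+`}
+
+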
+
+
+
+
+#### Binary data
+In addition to JSON, pure binary data can be stored as an [attachment](../../document-extensions/attachments/what-are-attachments.mdx)
+associated with an existing document. Files can be added to the request with the `-T` option. Some types of requests, though, allow you to include raw binary in the body - such as the
+[Put Attachment Command](../../client-api/rest-api/document-commands/batch-commands.mdx#put-attachment-command).
+
+
+
+## Using cURL With HTTPS
+
+HTTPS adds public-key encryption on top of standard HTTP to protect information during transit between client and server. It has
+become increasingly common throughout the internet in recent years. Our [setup wizard](../../start/installation/setup-wizard.mdx) makes
+it very simple to set up a secure server using a free [Let's Encrypt](https://letsencrypt.org/) certificate.
+
+To communicate with a secure server over https, you need to specify the paths to your client certificate and private key
+files with the `--cert` and `--key` options respectively:
+
+
+
+{`curl --cert <path/to/certificate.crt> --key <path/to/private.key> "<request URL>"
+`}
+
+
+
+These files can be found in the configuration Zip package you received at the end of the setup wizard. You can download this Zip package
+again by going to this endpoint: `/admin/debug/cluster-info-package`. The certificate and key are found at
+the root of the package with the names: `admin.client.certificate..crt`, and
+`admin.client.certificate..key` respectively.
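+
+Putting it together, a request to a secured server might look like this sketch (the server URL and the cluster name "mycluster" in the file names are illustrative):
+
+
+
+{`curl --cert admin.client.certificate.mycluster.crt --key admin.client.certificate.mycluster.key "https://your.server.url/databases/Example/docs?pagesize=1"
+`}
+
+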
+
+
+
+## Common HTTP Status Codes
+
+These are a few of the HTTP status codes we use in our REST API, and what we mean by them:
+
+| Code | [Official IANA description](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml) | Purpose |
+| - | - | - |
+| 200 | OK | Indicates that a valid request was received by the server, such as `GET` requests and queries. This includes cases where the response body itself is empty because the query returned 0 results. |
+| 201 | Created | Confirms the success of document `PUT` requests |
+| 304 | Not Modified | When prompted, the server can check if the requested data has been modified since the previous request. If it hasn't, the server responds with this status code to tell the client that it can continue to use the locally cached copy of the data. This is a mechanism we often use to minimize traffic over the network. |
+| 404 | Not Found | Sometimes used to indicate that the request was valid but the requested data could not be found |
+| 409 | Conflict | Indicates that the database has received [conflicting commands](../../server/clustering/replication/replication-conflicts.mdx). This happens in clusters when different nodes receive commands to modify the same data at the same time - before the modification could be passed on to the rest of the cluster. |
+| 500 | Internal Server Error | Used for exceptions that occur on the server side |
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/security/_category_.json b/versioned_docs/version-7.1/client-api/security/_category_.json
new file mode 100644
index 0000000000..e9e2bab6c4
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/security/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 14,
+ "label": "Security"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/security/_deserialization-security-csharp.mdx b/versioned_docs/version-7.1/client-api/security/_deserialization-security-csharp.mdx
new file mode 100644
index 0000000000..f3c8912f93
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/security/_deserialization-security-csharp.mdx
@@ -0,0 +1,203 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* Data deserialization can trigger the execution of gadgets that
+ may initiate RCE attacks on the client machine.
+* To handle this threat, RavenDB's default deserializer blocks the
+ deserialization of known [`.NET` RCE gadgets](https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html#known-net-rce-gadgets).
+* Users can easily modify the list of namespaces and object types
+  for which deserialization is forbidden or allowed.
+
+* In this page:
+ * [Securing Deserialization](../../client-api/security/deserialization-security.mdx#securing-deserialization)
+ * [Invoking a Gadget](../../client-api/security/deserialization-security.mdx#invoking-a-gadget)
+ * [DefaultRavenSerializationBinder](../../client-api/security/deserialization-security.mdx#defaultravenserializationbinder)
+ * [RegisterForbiddenNamespace](../../client-api/security/deserialization-security.mdx#section)
+ * [RegisterForbiddenType](../../client-api/security/deserialization-security.mdx#section-1)
+ * [RegisterSafeType](../../client-api/security/deserialization-security.mdx#section-2)
+ * [Example](../../client-api/security/deserialization-security.mdx#example)
+
+
+## Securing Deserialization
+
+* When a RavenDB client uses the [Newtonsoft library](https://www.newtonsoft.com/json/help/html/SerializingJSON.htm)
+ to deserialize a JSON string to a `.NET` object, the object may include
+ a reference to a **gadget** (a code segment) and the deserialization
+ process may execute this gadget.
+* Some gadgets attempt to exploit the deserialization process and initiate
+ an RCE (Remote Code Execution) attack that may, for example, inject the
+ system with malicious code. RCE attacks may sabotage the system, gain
+ control over it, steal information, and so on.
+* To prevent such exploitation, RavenDB's default deserializer
+ blocks deserialization for suspicious namespaces and
+ [known `.NET` RCE gadgets](https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html#known-net-rce-gadgets):
+  - `System.Configuration.Install.AssemblyInstaller`
+  - `System.Activities.Presentation.WorkflowDesigner`
+  - `System.Windows.ResourceDictionary`
+  - `System.Windows.Data.ObjectDataProvider`
+  - `System.Windows.Forms.BindingSource`
+  - `Microsoft.Exchange.Management.SystemManager.WinForms.ExchangeSettingsProvider`
+  - `System.Data.DataViewManager, System.Xml.XmlDocument/XmlDataDocument`
+  - `System.Management.Automation.PSObject`
+
+* Users can easily [modify](../../client-api/security/deserialization-security.mdx#defaultravenserializationbinder)
+ the list of namespaces and object types for which deserialization is forbidden
+ or allowed.
+
+
+
+## Invoking a Gadget
+
+* **Directly-loaded gadgets cannot be blocked using the default binder**.
+  When a gadget is loaded directly, its loading and execution during
+  deserialization are **permitted** regardless of the contents of the
+  default deserializer list.
+
+  E.g., the following segment will be executed:
+
+
+{`// The object will be allowed to be deserialized
+// regardless of the default binder list.
+// (The type parameter and document ID are illustrative.)
+session.Load<System.Windows.Data.ObjectDataProvider>("gadgets/1");
+`}
+
+
+* **Indirectly-loaded gadgets can be blocked using the default binder**.
+  When a gadget is loaded indirectly, its loading and execution during
+  deserialization **can be blocked** using the default deserializer list.
+
+  E.g., in the following sample, taken [from here](https://book.hacktricks.xyz/pentesting-web/deserialization/basic-.net-deserialization-objectdataprovider-gadgets-expandedwrapper-and-json.net#abusing-json.net),
+  a gadget is loaded indirectly: its type name is embedded as a value
+  in the JSON, and is only resolved and used to execute the gadget
+  during deserialization.
+  Including this type in the default deserialization list will
+  prevent the gadget's deserialization and execution.
+
+
+{`string userdata = @"\{
+ '$type':'System.Windows.Data.ObjectDataProvider, PresentationFramework, Version=4.0.0.0,
+ Culture=neutral, PublicKeyToken=31bf3856ad364e35',
+ 'MethodName':'Start',
+ 'MethodParameters':\{
+ '$type':'System.Collections.ArrayList, mscorlib, Version=4.0.0.0,
+ Culture=neutral, PublicKeyToken=b77a5c561934e089',
+ '$values':['cmd', '/c calc.exe']
+ \},
+ 'ObjectInstance':\{'$type':'System.Diagnostics.Process, System, Version=4.0.0.0,
+ Culture=neutral, PublicKeyToken=b77a5c561934e089'\}
+\}";
+`}
+
+
+
+
+
+
+## `DefaultRavenSerializationBinder`
+
+Use the `DefaultRavenSerializationBinder` convention and its methods to
+block the deserialization of suspicious namespaces and object types or
+allow the deserialization of trusted object types.
+
+Define a `DefaultRavenSerializationBinder` instance, use the dedicated
+methods to forbid or allow the deserialization of entities, and register
+the defined instance as a serialization convention as shown
+[below](../../client-api/security/deserialization-security.mdx#example).
+
+
+Be sure to update the default deserializer list **before** the initialization
+of the document store that you want the list to apply to.
+
+### `RegisterForbiddenNamespace`
+Use `RegisterForbiddenNamespace` to prevent the deserialization of objects loaded from a given namespace.
+
+
+
+{`public void RegisterForbiddenNamespace(string @namespace)
+`}
+
+
+
+ | Parameter | Type | Description |
+ |:-------------:|:-------------:|-------------|
+ | **@namespace** | `string` | The name of a namespace from which deserialization won't be allowed. |
+
+
+ Attempting to deserialize a forbidden namespace will throw an
+ `InvalidOperationException` exception with the following details:
+ _"Cannot resolve type" + `type.FullName` + "because the namespace is on a blacklist due to
+ security reasons. Please customize json deserializer in the conventions and override SerializationBinder
+ with your own logic if you want to allow this type."_
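+
+For example, once such a binder is registered in the store conventions (see the [Example](../../client-api/security/deserialization-security.mdx#example) below),
+an attempt to load a document whose type belongs to the forbidden namespace fails. A minimal sketch - the namespace, type, and
+document ID here are hypothetical:
+
+
+
+{`var binder = new DefaultRavenSerializationBinder();
+binder.RegisterForbiddenNamespace("SuspiciousNamespace");
+// ... register the binder in the store conventions ...
+
+try
+\{
+    // 'Widget' is a hypothetical type defined in the forbidden namespace
+    var widget = session.Load<SuspiciousNamespace.Widget>("widgets/1");
+\}
+catch (InvalidOperationException)
+\{
+    // Deserialization was blocked by the binder
+\}
+`}
+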
+
+### `RegisterForbiddenType`
+Use `RegisterForbiddenType` to prevent the deserialization of a given object type.
+
+
+
+{`public void RegisterForbiddenType(Type type)
+`}
+
+
+
+ | Parameter | Type | Description |
+ |:-------------:|:-------------:|-------------|
+ | **type** | `Type` | An object type whose deserialization won't be allowed. |
+
+
+ Attempting to deserialize a forbidden object type will throw an
+ `InvalidOperationException` exception with the following details:
+ _"Cannot resolve type" + `type.FullName` + "because the type is on a blacklist due to
+ security reasons.
+ Please customize json deserializer in the conventions and override SerializationBinder
+ with your own logic if you want to allow this type."_
+
+### `RegisterSafeType`
+Use `RegisterSafeType` to **allow** the deserialization of a given object type.
+
+
+
+{`public void RegisterSafeType(Type type)
+`}
+
+
+
+ | Parameter | Type | Description |
+ |:-------------:|:-------------:|-------------|
+ | **type** | `Type` | An object type whose deserialization **will** be allowed. |
+
+## Example
+
+
+
+{`// Create a default serialization binder
+var binder = new DefaultRavenSerializationBinder();
+// Register a forbidden namespace
+binder.RegisterForbiddenNamespace("SuspiciousNamespace");
+// Register a forbidden object type
+binder.RegisterForbiddenType(suspiciousObject.GetType());
+// Register a trusted object type
+binder.RegisterSafeType(trustedObject.GetType());
+
+var store = new DocumentStore()
+\{
+ Conventions =
+ \{
+ Serialization = new NewtonsoftJsonSerializationConventions
+ \{
+ // Customize store deserialization using the defined binder
+ CustomizeJsonDeserializer = deserializer => deserializer.SerializationBinder = binder
+ \}
+ \}
+\};
+`}
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/security/deserialization-security.mdx b/versioned_docs/version-7.1/client-api/security/deserialization-security.mdx
new file mode 100644
index 0000000000..59f18ec740
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/security/deserialization-security.mdx
@@ -0,0 +1,29 @@
+---
+title: "Security: Deserialization"
+hide_table_of_contents: true
+sidebar_label: Deserialization Security
+sidebar_position: 0
+---
+
+import LanguageSwitcher from "@site/src/components/LanguageSwitcher";
+import LanguageContent from "@site/src/components/LanguageContent";
+
+import DeserializationSecurityCsharp from './_deserialization-security-csharp.mdx';
+
+export const supportedLanguages = ["csharp"];
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/session/_category_.json b/versioned_docs/version-7.1/client-api/session/_category_.json
new file mode 100644
index 0000000000..25b2102722
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_category_.json
@@ -0,0 +1,4 @@
+{
+ "position": 6,
+ "label": "Session"
+}
\ No newline at end of file
diff --git a/versioned_docs/version-7.1/client-api/session/_deleting-entities-csharp.mdx b/versioned_docs/version-7.1/client-api/session/_deleting-entities-csharp.mdx
new file mode 100644
index 0000000000..dd2a653dd0
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_deleting-entities-csharp.mdx
@@ -0,0 +1,105 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+Entities can be marked for deletion by using the `Delete` method, but will not be removed from the server until `SaveChanges` is called.
+
+## Syntax
+
+
+
+{`void Delete<T>(T entity);
+
+void Delete(string id);
+
+void Delete(string id, string expectedChangeVector);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **entity** | `T` | instance of the entity to delete |
+| **id** | `string` | ID of the entity to delete |
+| **expectedChangeVector** | `string` | a change vector to use for concurrency checks |
+
+## Example I
+
+
+
+
+{`Employee employee = session.Load<Employee>("employees/1");
+
+session.Delete(employee);
+session.SaveChanges();
+`}
+
+
+
+
+{`Employee employee = await session.LoadAsync<Employee>("employees/1");
+
+session.Delete(employee);
+await session.SaveChangesAsync();
+`}
+
+
+
+
+
+If `UseOptimisticConcurrency` is set to `true` (default: `false`), the `Delete()` method will use the change vector of the loaded 'employees/1' entity for the concurrency check and might throw a `ConcurrencyException`.
+
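+The third `Delete` overload lets you pass the expected change vector explicitly. Here is a minimal sketch, reading the tracked entity's change vector with `GetChangeVectorFor`:
+
+
+
+{`Employee employee = session.Load<Employee>("employees/1");
+string changeVector = session.Advanced.GetChangeVectorFor(employee);
+
+// Delete only if the document on the server still has this change vector;
+// otherwise SaveChanges throws a ConcurrencyException
+session.Delete("employees/1", changeVector);
+session.SaveChanges();
+`}
+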
+
+## Example II
+
+
+
+
+{`session.Delete("employees/1");
+session.SaveChanges();
+`}
+
+
+
+
+{`session.Delete("employees/1");
+await session.SaveChangesAsync();
+`}
+
+
+
+
+
+In this overload, the `Delete()` method will not perform any change-vector-based concurrency checks because the change vector for 'employees/1' is unknown.
+
+
+
+
+If the entity is **not** tracked by the session, then executing:
+
+
+
+{`session.Delete("employees/1");
+`}
+
+
+
+is equivalent to doing:
+
+
+
+{`session.Advanced.Defer(new DeleteCommandData("employees/1", changeVector: null));
+`}
+
+
+
+
+In this sample the change vector is null - this means that there will be no concurrency checks. A non-null and valid change vector value will trigger a concurrency check.
+
+
+You can read more about defer operations [here](../../client-api/session/how-to/defer-operations.mdx).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/session/_deleting-entities-java.mdx b/versioned_docs/version-7.1/client-api/session/_deleting-entities-java.mdx
new file mode 100644
index 0000000000..4db1c60908
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_deleting-entities-java.mdx
@@ -0,0 +1,85 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+Entities can be marked for deletion by using the `delete` method, but will not be removed from the server until `saveChanges` is called.
+
+## Syntax
+
+
+
+{`<T> void delete(T entity);
+
+void delete(String id);
+
+void delete(String id, String expectedChangeVector);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **entity** | `T` | instance of the entity to delete |
+| **id** | `String` | ID of the entity to delete |
+| **expectedChangeVector** | `String` | a change vector to use for concurrency checks |
+
+## Example I
+
+
+
+{`Employee employee = session.load(Employee.class, "employees/1");
+
+session.delete(employee);
+session.saveChanges();
+`}
+
+
+
+
+If `useOptimisticConcurrency` is set to `true` (default: `false`), the `delete()` method will use the change vector of the loaded 'employees/1' entity for the concurrency check and might throw a `ConcurrencyException`.
+
+
+## Example II
+
+
+
+{`session.delete("employees/1");
+session.saveChanges();
+`}
+
+
+
+
+In this overload, the `delete()` method will not perform any change-vector-based concurrency checks because the change vector for 'employees/1' is unknown.
+
+
+
+
+If the entity is **not** tracked by the session, then executing:
+
+
+
+{`session.delete("employees/1");
+`}
+
+
+
+is equivalent to doing:
+
+
+
+{`session.advanced().defer(new DeleteCommandData("employees/1", null));
+`}
+
+
+
+
+In this sample the change vector is null - this means that there will be no concurrency checks. A non-null and valid change vector value will trigger a concurrency check.
+
+
+You can read more about defer operations [here](./how-to/defer-operations).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/session/_deleting-entities-nodejs.mdx b/versioned_docs/version-7.1/client-api/session/_deleting-entities-nodejs.mdx
new file mode 100644
index 0000000000..29a9b31411
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_deleting-entities-nodejs.mdx
@@ -0,0 +1,85 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+Entities can be marked for deletion by using the `delete()` method, but will *not* be removed from the server until `saveChanges()` is called.
+
+## Syntax
+
+
+
+{`await session.delete(entity);
+
+await session.delete(id);
+
+await session.delete(id, [changeVector]);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **entity** | `object` | Instance of the entity to delete |
+| **id** | `string` | The entity ID |
+| **changeVector** | `string` | a change vector to use for concurrency checks |
+
+## Example I
+
+
+
+{`const employee = await session.load("employees/1");
+
+await session.delete(employee);
+await session.saveChanges();
+`}
+
+
+
+
+If `useOptimisticConcurrency` is set to *true* (default *false*), the `delete()` method will use loaded *employees/1* change vector for concurrency check and might throw `ConcurrencyException`.
+
+
+## Example II
+
+
+
+{`await session.delete("employees/1");
+await session.saveChanges();
+`}
+
+
+
+
+In this example, the `delete()` method will not perform any change-vector-based concurrency checks because the change vector for *employees/1* is unknown.
+
+
+
+
+If the entity is **not** tracked by the session, then executing:
+
+
+
+{`await session.delete("employees/1");
+`}
+
+
+
+is equivalent to doing:
+
+
+
+{`await session.advanced.defer(new DeleteCommandData("employees/1", null));
+`}
+
+
+
+
+In this sample the change vector is null - this means that there will be no concurrency checks. A non-null and valid change vector value will trigger a concurrency check.
+
+
+You can read more about defer operations [here](./how-to/defer-operations).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/session/_deleting-entities-php.mdx b/versioned_docs/version-7.1/client-api/session/_deleting-entities-php.mdx
new file mode 100644
index 0000000000..6dad4e286a
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_deleting-entities-php.mdx
@@ -0,0 +1,85 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+Entities can be marked for deletion by using the `delete` method, but will not be removed from the server until `saveChanges` is called.
+
+## Syntax
+
+
+
+{`public function delete(?object $entity): void;
+
+public function delete(?string $id): void;
+
+public function delete(?string $id, ?string $expectedChangeVector): void;
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **entity** | `?object` | instance of the entity to delete |
+| **id** | `string` | ID of the entity to delete |
+| **expectedChangeVector** | `string` | a change vector to use for concurrency checks |
+
+## Example I
+
+
+
+{`$employee = $session->load(Employee::class, "employees/1");
+
+$session->delete($employee);
+$session->saveChanges();
+`}
+
+
+
+
+If optimistic concurrency is enabled (it is disabled by default), the `delete()` method will use the change vector of the loaded 'employees/1' entity for the concurrency check and might throw a `ConcurrencyException`.
+
+
+## Example II
+
+
+
+{`$session->delete("employees/1");
+$session->saveChanges();
+`}
+
+
+
+
+In this overload, the `delete()` method will not perform any change-vector-based concurrency checks because the change vector for 'employees/1' is unknown.
+
+
+
+
+If the entity is **not** tracked by the session, then executing:
+
+
+
+{`$session->delete("employees/1");
+`}
+
+
+
+is equivalent to doing:
+
+
+
+{`$session->advanced()->defer(new DeleteCommandData("employees/1", null));
+`}
+
+
+
+
+In this sample the change vector is null - this means that there will be no concurrency checks. A non-null and valid change vector value will trigger a concurrency check.
+
+
+You can read more about defer operations [here](../../client-api/session/how-to/defer-operations.mdx).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/session/_deleting-entities-python.mdx b/versioned_docs/version-7.1/client-api/session/_deleting-entities-python.mdx
new file mode 100644
index 0000000000..c326c0c97f
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_deleting-entities-python.mdx
@@ -0,0 +1,81 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+Entities can be marked for deletion by using the `delete()` method, but will not be removed from the server until `save_changes()` is called.
+
+## Syntax
+
+
+
+{`def delete(self, key_or_entity: Union[str, object], expected_change_vector: Optional[str] = None) -> None:
+ ...
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **key_or_entity** | `str` or `object` | ID of the document or instance of the entity to delete |
+| **expected_change_vector** | `str` | a change vector to use for concurrency checks |
+
+## Example I
+
+
+
+{`employee = session.load("employees/1")
+
+session.delete(employee)
+session.save_changes()
+`}
+
+
+
+
+If `use_optimistic_concurrency` is set to `True` (default: `False`), the `delete()` method will use the change vector of the loaded 'employees/1' entity for the concurrency check and might throw a `ConcurrencyException`.
+
+
+## Example II
+
+
+
+{`session.delete("employees/1")
+session.save_changes()
+`}
+
+
+
+
+The `delete()` method will not perform any change-vector-based concurrency checks here because the change vector for 'employees/1' is unknown.
+
+
+
+
+If the entity is **not** tracked by the session, then executing:
+
+
+
+{`session.delete("employees/1")
+`}
+
+
+
+is equivalent to doing:
+
+
+
+{`session.advanced.defer(DeleteCommandData("employees/1", change_vector=None))
+`}
+
+
+
+
+In this sample the change vector is None - this means that there will be no concurrency checks. A valid, non-None change vector value will trigger a concurrency check.
+
+
+You can read more about defer operations [here](../../client-api/session/how-to/defer-operations.mdx).
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/session/_loading-entities-csharp.mdx b/versioned_docs/version-7.1/client-api/session/_loading-entities-csharp.mdx
new file mode 100644
index 0000000000..46ada52224
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_loading-entities-csharp.mdx
@@ -0,0 +1,639 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* There are several methods that allow users to load documents from the database and convert them to entities.
+
+* This article covers the following methods:
+
+ - [Load](../../client-api/session/loading-entities.mdx#load)
+ - [Load with Includes](../../client-api/session/loading-entities.mdx#load-with-includes)
+ - [Load - multiple entities](../../client-api/session/loading-entities.mdx#load---multiple-entities)
+ - [LoadStartingWith](../../client-api/session/loading-entities.mdx#loadstartingwith)
+ - [ConditionalLoad](../../client-api/session/loading-entities.mdx#conditionalload)
+ - [Stream](../../client-api/session/loading-entities.mdx#stream)
+ - [IsLoaded](../../client-api/session/loading-entities.mdx#isloaded)
+
+* For loading entities lazily see [perform requests lazily](../../client-api/session/how-to/perform-operations-lazily.mdx).
+
+
+## Load
+
+The most basic way to load a single entity is to use one of the `Load` methods.
+
+
+
+
+{`TResult Load<TResult>(string id);
+`}
+
+
+
+
+{`Task<TResult> LoadAsync<TResult>(string id);
+`}
+
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **id** | `string` | Identifier of a document that will be loaded. |
+
+| Return Type | Description |
+| ------------- | ----- |
+| `TResult` | Instance of `TResult` or `null` if a document with a given ID does not exist. |
+
+### Example
+
+
+
+
+{`Employee employee = session.Load<Employee>("employees/1");
+`}
+
+
+
+
+{`Employee employee = await asyncSession.LoadAsync<Employee>("employees/1");
+`}
+
+
+
+
+
+From RavenDB version 4.x onwards, only string identifiers are supported. If you are upgrading from 3.x, this is a major change, because in 3.x non-string identifiers were supported.
+
+
+
+
+## Load with Includes
+
+When there is a 'relationship' between documents, those documents can be loaded in a
+single request call using the `Include + Load` methods. Learn more in
+[How To Handle Document Relationships](../../client-api/how-to/handle-document-relationships.mdx).
+
+
+Also see:
+
+* [Including Counters](../../document-extensions/counters/counters-and-other-features.mdx#including-counters)
+* [Including Time Series](../../document-extensions/timeseries/client-api/session/include/overview.mdx)
+* [Including Compare Exchange Values](../../client-api/operations/compare-exchange/include-compare-exchange.mdx)
+* [Including Document Revisions](../../document-extensions/revisions/client-api/session/including.mdx)
+
+
+
+
+{`ILoaderWithInclude<object> Include(string path);
+
+ILoaderWithInclude<T> Include<T>(Expression<Func<T, string>> path);
+
+ILoaderWithInclude<T> Include<T, TInclude>(Expression<Func<T, string>> path);
+`}
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **path** | `string` or Expression | Path in documents in which the server should look for 'referenced' documents. |
+
+| Return Type | Description |
+| ------------- | ----- |
+| `ILoaderWithInclude` | The `Include` method by itself does not materialize any requests but returns loader containing methods such as `Load`. |
+
+### Example I
+
+We can use this code to also load the supplier referenced by the product.
+
+
+
+
+{`// loading 'products/1'
+// including document found in 'Supplier' property
+Product product = session
+ .Include("Supplier")
+    .Load<Product>("products/1");
+
+Supplier supplier = session.Load<Supplier>(product.Supplier); // this will not make server call
+`}
+
+
+
+
+{`// loading 'products/1'
+// including document found in 'Supplier' property
+Product product = await asyncSession
+ .Include("Supplier")
+    .LoadAsync<Product>("products/1");
+
+Supplier supplier = await asyncSession.LoadAsync<Supplier>(product.Supplier); // this will not make server call
+`}
+
+
+
+
+### Example II
+
+
+
+
+{`// loading 'products/1'
+// including document found in 'Supplier' property
+Product product = session
+ .Include(x => x.Supplier)
+    .Load<Product>("products/1");
+
+Supplier supplier = session.Load<Supplier>(product.Supplier); // this will not make server call
+`}
+
+
+
+
+{`// loading 'products/1'
+// including document found in 'Supplier' property
+Product product = await asyncSession
+ .Include(x => x.Supplier)
+    .LoadAsync<Product>("products/1");
+
+Supplier supplier = await asyncSession.LoadAsync<Supplier>(product.Supplier); // this will not make server call
+`}
+
+
+
+
+
+
+## Load - multiple entities
+
+To load multiple entities at once, use one of the following `Load` overloads.
+
+
+
+
+{`Dictionary<string, TResult> Load<TResult>(IEnumerable<string> ids);
+`}
+
+
+
+
+{`Task<Dictionary<string, TResult>> LoadAsync<TResult>(IEnumerable<string> ids);
+`}
+
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **ids** | `IEnumerable<string>` | Multiple document identifiers to load |
+
+| Return Type | Description |
+| ------------- | ----- |
+| `Dictionary<string, TResult>` | Instance of Dictionary which maps document identifiers to `TResult` or `null` if a document with given ID doesn't exist. |
+
+
+
+
+{`Dictionary<string, Employee> employees = session.Load<Employee>(new[]
+{
+ "employees/1",
+ "employees/2",
+ "employees/3"
+});
+`}
+
+
+
+
+{`Dictionary<string, Employee> employees = await asyncSession.LoadAsync<Employee>(new[]
+{
+ "employees/1",
+ "employees/2",
+});
+`}
+
+
+
+
+
+
+## LoadStartingWith
+
+To load multiple entities that contain a common prefix, use the `LoadStartingWith` method from the `Advanced` session operations.
+
+
+
+
+{`T[] LoadStartingWith<T>(
+ string idPrefix,
+ string matches = null,
+ int start = 0,
+ int pageSize = 25,
+ string exclude = null,
+ string startAfter = null);
+
+void LoadStartingWithIntoStream(
+ string idPrefix,
+ Stream output,
+ string matches = null,
+ int start = 0,
+ int pageSize = 25,
+ string exclude = null,
+ string startAfter = null);
+`}
+
+
+
+
+{`Task<T[]> LoadStartingWithAsync<T>(
+ string idPrefix,
+ string matches = null,
+ int start = 0,
+ int pageSize = 25,
+ string exclude = null,
+ string startAfter = null);
+
+Task LoadStartingWithIntoStreamAsync(
+ string idPrefix,
+ Stream output,
+ string matches = null,
+ int start = 0,
+ int pageSize = 25,
+ string exclude = null,
+ string startAfter = null);
+`}
+
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **idPrefix** | `string` | prefix for which the documents should be returned |
+| **matches** | `string` | pipe ('\|') separated values for which document IDs (after 'idPrefix') should be matched ('?' any single character, '*' any characters) |
+| **start** | `int` | number of documents that should be skipped |
+| **pageSize** | `int` | maximum number of documents that will be retrieved |
+| **exclude** | `string` | pipe ('\|') separated values for which document IDs (after 'idPrefix') should **not** be matched ('?' any single character, '*' any characters) |
+| **startAfter** | `string` | skip document fetching until the given ID is found, and return documents after that ID (default: `null`) |
+
+| Return Type | Description |
+| ------------- | ----- |
+| `T[]` | Array of entities matching given parameters. |
+| `Stream` | Output entities matching given parameters as a stream. |
+
+### Example I
+
+
+
+
+{`// return up to 128 entities with Id that starts with 'employees'
+Employee[] result = session
+ .Advanced
+    .LoadStartingWith<Employee>("employees", null, 0, 128);
+`}
+
+
+
+
+{`// return up to 128 entities with Id that starts with 'employees'
+Employee[] result = (await asyncSession
+ .Advanced
+    .LoadStartingWithAsync<Employee>("employees", null, 0, 128))
+ .ToArray();
+`}
+
+
+
+
+### Example II
+
+
+
+
+{`// return up to 128 entities with Id that starts with 'employees/'
+// and rest of the key begins with "1" or "2" e.g. employees/10, employees/25
+Employee[] result = session
+ .Advanced
+    .LoadStartingWith<Employee>("employees/", "1*|2*", 0, 128);
+`}
+
+
+
+
+{`// return up to 128 entities with Id that starts with 'employees/'
+// and rest of the key begins with "1" or "2" e.g. employees/10, employees/25
+Employee[] result = (await asyncSession
+ .Advanced
+    .LoadStartingWithAsync<Employee>("employees/", "1*|2*", 0, 128))
+ .ToArray();
+`}
+
+
+
+
+
+
+## ConditionalLoad
+
+This method can be used to check whether a document has been modified
+since the last time its change vector was recorded, so that the cost of loading it
+can be saved if it has not been modified.
+
+The `ConditionalLoad` method takes a document's [change vector](../../server/clustering/replication/change-vector.mdx).
+If the entity is tracked by the session, this method returns the entity. If the entity
+is not tracked, it checks if the provided change vector matches the document's
+current change vector on the server side. If they match, the entity is not loaded.
+If the change vectors _do not_ match, the document is loaded.
+
+The method is accessible from the `session.Advanced` operations.
+
+
+
+
+{`(T Entity, string ChangeVector) ConditionalLoad<T>(string id, string changeVector);
+`}
+
+
+
+
+{`Task<(T Entity, string ChangeVector)> ConditionalLoadAsync<T>(string id, string changeVector);
+`}
+
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **id** | `string` | The identifier of a document to be loaded. |
+| **changeVector** | `string` | The change vector you want to compare with the server-side change vector. If the change vectors match, the document is not loaded. |
+
+| Return Type | Description |
+|--------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| ValueTuple `(T Entity, string ChangeVector)` | If the given change vector and the server-side change vector do not match, the method returns the requested entity and its current change vector. If the change vectors match, the method returns `default` as the entity, along with the current change vector. If the specified document does not exist, the method returns only `default`, without a change vector. |
+
+### Example
+
+
+
+
+{`string changeVector;
+User user = new User { Name = "Bob" };
+
+using (var session = store.OpenSession())
+{
+ session.Store(user, "users/1");
+ session.SaveChanges();
+
+ changeVector = session.Advanced.GetChangeVectorFor(user);
+}
+
+// New session which does not track our User entity
+using (var session = store.OpenSession())
+{
+ // The given change vector matches
+ // the server-side change vector
+ // Does not load the document
+ var result1 = session.Advanced
+        .ConditionalLoad<User>("users/1", changeVector);
+
+ // Modify the document
+ user.Name = "Bob Smith";
+ session.Store(user);
+ session.SaveChanges();
+
+ // Change vectors do not match
+ // Loads the document
+ var result2 = session.Advanced
+        .ConditionalLoad<User>("users/1", changeVector);
+}
+`}
+
+
+
+
+{`string changeVector;
+User user = new User { Name = "Bob" };
+
+using (var session = store.OpenAsyncSession())
+{
+ await session.StoreAsync(user, "users/1");
+ await session.SaveChangesAsync();
+
+ changeVector = session.Advanced.GetChangeVectorFor(user);
+}
+
+// New session which does not track our User entity
+using (var session = store.OpenAsyncSession())
+{
+ // The given change vector matches
+ // the server-side change vector
+ // Does not load the document
+ var result1 = await session.Advanced
+        .ConditionalLoadAsync<User>("users/1", changeVector);
+
+ // Modify the document
+ user.Name = "Bob Smith";
+ await session.StoreAsync(user);
+ await session.SaveChangesAsync();
+
+ // Change vectors do not match
+ // Loads the document
+ var result2 = await session.Advanced
+        .ConditionalLoadAsync<User>("users/1", changeVector);
+}
+`}
+
+
+
+
+
+
+## Stream
+
+Entities can be streamed from the server using one of the following `Stream` methods from the `Advanced` session operations.
+
+Streaming query results does not support the [`include` feature](../../client-api/how-to/handle-document-relationships.mdx#includes).
+Learn more in [How to Stream Query Results](../../client-api/session/querying/how-to-stream-query-results.mdx).
+
+
+Entities loaded using `Stream` will be transient (not attached to session).
+
+
+
+
+
+{`IEnumerator<StreamResult<T>> Stream<T>(IQueryable<T> query);
+
+IEnumerator<StreamResult<T>> Stream<T>(IQueryable<T> query, out StreamQueryStatistics streamQueryStats);
+
+IEnumerator<StreamResult<T>> Stream<T>(IDocumentQuery<T> query);
+
+IEnumerator<StreamResult<T>> Stream<T>(IRawDocumentQuery<T> query);
+
+IEnumerator<StreamResult<T>> Stream<T>(IRawDocumentQuery<T> query, out StreamQueryStatistics streamQueryStats);
+
+IEnumerator<StreamResult<T>> Stream<T>(IDocumentQuery<T> query, out StreamQueryStatistics streamQueryStats);
+
+IEnumerator<StreamResult<T>> Stream<T>(string startsWith, string matches = null, int start = 0, int pageSize = int.MaxValue, string startAfter = null);
+`}
+
+
+
+
+{`Task<IAsyncEnumerator<StreamResult<T>>> StreamAsync<T>(IQueryable<T> query);
+
+Task<IAsyncEnumerator<StreamResult<T>>> StreamAsync<T>(IQueryable<T> query, out StreamQueryStatistics streamQueryStats);
+
+Task<IAsyncEnumerator<StreamResult<T>>> StreamAsync<T>(IDocumentQuery<T> query);
+
+Task<IAsyncEnumerator<StreamResult<T>>> StreamAsync<T>(IRawDocumentQuery<T> query);
+
+Task<IAsyncEnumerator<StreamResult<T>>> StreamAsync<T>(IRawDocumentQuery<T> query, out StreamQueryStatistics streamQueryStats);
+
+Task<IAsyncEnumerator<StreamResult<T>>> StreamAsync<T>(IDocumentQuery<T> query, out StreamQueryStatistics streamQueryStats);
+
+Task<IAsyncEnumerator<StreamResult<T>>> StreamAsync<T>(string startsWith, string matches = null, int start = 0, int pageSize = int.MaxValue, string startAfter = null);
+`}
+
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **startsWith** | `string` | prefix for which documents should be streamed |
+| **matches** | `string` | pipe ('\|') separated values for which document IDs should be matched ('?' any single character, '*' any characters) |
+| **start** | `int` | number of documents that should be skipped |
+| **pageSize** | `int` | maximum number of documents that will be retrieved |
+| **startAfter** | `string` | skip document fetching until the given ID is found, and return documents after that ID (default: `null`) |
+| **streamQueryStats** | `StreamQueryStatistics` (out parameter) | Information about the streaming query (amount of results, which index was queried, etc.) |
+
+| Return Type | Description |
+| ------------- | ----- |
+| `IEnumerator<`[StreamResult](../../glossary/stream-result.mdx)`>` | Enumerator with entities. |
+| `streamQueryStats` (out parameter) | Information about the streaming query (amount of results, which index was queried, etc.) |
+
+
+### Example I
+
+Stream documents for an ID prefix:
+
+
+
+
+{`IEnumerator<StreamResult<Employee>> enumerator = session
+ .Advanced
+    .Stream<Employee>("employees/");
+
+while (enumerator.MoveNext())
+{
+    StreamResult<Employee> employee = enumerator.Current;
+}
+`}
+
+
+
+
+{`IAsyncEnumerator<StreamResult<Employee>> enumerator = await asyncSession
+ .Advanced
+    .StreamAsync<Employee>("employees/");
+
+while (await enumerator.MoveNextAsync())
+{
+    StreamResult<Employee> employee = enumerator.Current;
+}
+`}
+
+
+
+
+### Example II
+
+Fetch documents for an ID prefix directly into a stream:
+
+
+
+
+{`using (var outputStream = new MemoryStream())
+{
+ session
+ .Advanced
+ .LoadStartingWithIntoStream("employees/", outputStream);
+}
+`}
+
+
+
+
+{`using (var outputStream = new MemoryStream())
+{
+ await asyncSession
+ .Advanced
+ .LoadStartingWithIntoStreamAsync("employees/", outputStream);
+}
+`}
+
+
+
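+### Example III
+
+A query can also be streamed together with statistics about it. Here is a minimal sketch using the `out StreamQueryStatistics` overload; the query itself is illustrative:
+
+
+
+{`IDocumentQuery<Employee> query = session
+    .Advanced
+    .DocumentQuery<Employee>();
+
+// Stream the query results and retrieve the query statistics
+IEnumerator<StreamResult<Employee>> enumerator = session
+    .Advanced
+    .Stream(query, out StreamQueryStatistics stats);
+
+while (enumerator.MoveNext())
+{
+    StreamResult<Employee> employee = enumerator.Current;
+}
+
+// 'stats' now holds information such as the number of results
+// and which index was queried
+`}
+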
+
+
+
+## IsLoaded
+
+Use the `IsLoaded` method from the `Advanced` session operations
+to check whether an entity is attached to the session (e.g. because it has
+been previously loaded).
+
+
+`IsLoaded` checks whether an attempt to load a document has already been made
+during the current session, and returns `true` even if such an attempt was
+made and failed.
+If, for example, the `Load` method was used to load `employees/3` during
+this session and failed because the document had previously been deleted,
+`IsLoaded` will still return `true` for `employees/3` for the remainder
+of the session, simply because of the attempt to load it.
+
+
+
+
+{`bool IsLoaded(string id);
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **id** | `string` | Entity ID for which the check should be performed. |
+
+| Return Type | Description |
+| ------------- | ----- |
+| `bool` | Indicates if an entity with a given ID is loaded. |
+
+### Example
+
+
+
+
+{`bool isLoaded = session.Advanced.IsLoaded("employees/1"); // false
+Employee employee = session.Load<Employee>("employees/1");
+isLoaded = session.Advanced.IsLoaded("employees/1"); // true
+`}
+
+
+
+
+{`bool isLoaded = asyncSession.Advanced.IsLoaded("employees/1"); // false
+Employee employee = await asyncSession.LoadAsync<Employee>("employees/1");
+isLoaded = asyncSession.Advanced.IsLoaded("employees/1"); // true
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/session/_loading-entities-java.mdx b/versioned_docs/version-7.1/client-api/session/_loading-entities-java.mdx
new file mode 100644
index 0000000000..9a0e9c176e
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_loading-entities-java.mdx
@@ -0,0 +1,396 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+There are several methods, with many overloads, that allow users to load documents
+from the database and convert them to entities. This article covers the following
+methods:
+
+- [Load](../../client-api/session/loading-entities.mdx#load)
+- [Load with Includes](../../client-api/session/loading-entities.mdx#load-with-includes)
+- [Load - multiple entities](../../client-api/session/loading-entities.mdx#load---multiple-entities)
+- [LoadStartingWith](../../client-api/session/loading-entities.mdx#loadstartingwith)
+- [ConditionalLoad](../../client-api/session/loading-entities.mdx#conditionalload)
+- [Stream](../../client-api/session/loading-entities.mdx#stream)
+- [IsLoaded](../../client-api/session/loading-entities.mdx#isloaded)
+
+
+## Load
+
+The most basic way to load a single entity is to use one of the `load` methods.
+
+
+
+{`<T> T load(Class<T> clazz, String id);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **id** | `String` | Identifier of a document that will be loaded. |
+
+| Return Value | |
+| ------------- | ----- |
+| T | Instance of `T` or `null` if a document with a given ID does not exist. |
+
+### Example
+
+
+
+{`Employee employee = session.load(Employee.class, "employees/1");
+`}
+
+
+
+
+From RavenDB version 4.x onwards, only string identifiers are supported. If you are upgrading from 3.x, this is a major change, because in 3.x non-string identifiers were supported.
+
+
+
+
+## Load with Includes
+
+When there is a 'relationship' between documents, those documents can be loaded in a
+single request call using the `include + load` methods. Learn more in
+[How To Handle Document Relationships](../../client-api/how-to/handle-document-relationships.mdx).
+See also [including counters](../../document-extensions/counters/counters-and-other-features.mdx#including-counters)
+and [including time series](../../document-extensions/timeseries/client-api/session/include/overview.mdx).
+
+
+
+{`ILoaderWithInclude include(String path);
+
+<TResult> Map<String, TResult> load(Class<TResult> clazz, String... ids);
+
+<TResult> Map<String, TResult> load(Class<TResult> clazz, Collection<String> ids);
+
+<TResult> TResult load(Class<TResult> clazz, String id);
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **path** | `String` | Path in documents in which the server should look for 'referenced' documents. |
+| **ids** | `String` | Ids to load. |
+
+| Return Type | Description |
+| ------------- | ----- |
+| `ILoaderWithInclude` | The `include` method by itself does not materialize any requests but returns loader containing methods such as `load`. |
+
+### Example I
+
+We can use this code to also load the supplier referenced by the product.
+
+
+
+{`// loading 'products/1'
+// including document found in 'supplier' property
+Product product = session
+ .include("Supplier")
+ .load(Product.class, "products/1");
+
+Supplier supplier = session.load(Supplier.class, product.getSupplier()); // this will not make server call
+`}
+
+
+
+
+
+## Load - multiple entities
+
+To load multiple entities at once, use one of the following `load` overloads.
+
+
+
+{`<T> Map<String, T> load(Class<T> clazz, String... ids);
+
+<T> Map<String, T> load(Class<T> clazz, Collection<String> ids);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **ids** | Collection<String> or String... | Multiple document identifiers to load |
+
+| Return Value | |
+| ------------- | ----- |
+| Map<String, T> | Instance of Map which maps document identifiers to `T` or `null` if a document with given ID doesn't exist. |
+
+
+
+{`Map<String, Employee> employees
+ = session.load(Employee.class,
+ "employees/1", "employees/2", "employees/3");
+`}
+
+
+
+
+
+## LoadStartingWith
+
+To load multiple entities that contain a common prefix, use the `loadStartingWith` method from the `advanced` session operations.
+
+
+
+{`<T> T[] loadStartingWith(Class<T> clazz, String idPrefix);
+
+<T> T[] loadStartingWith(Class<T> clazz, String idPrefix, String matches);
+
+<T> T[] loadStartingWith(Class<T> clazz, String idPrefix, String matches, int start);
+
+<T> T[] loadStartingWith(Class<T> clazz, String idPrefix, String matches, int start, int pageSize);
+
+<T> T[] loadStartingWith(Class<T> clazz, String idPrefix, String matches, int start, int pageSize, String exclude);
+
+<T> T[] loadStartingWith(Class<T> clazz, String idPrefix, String matches, int start, int pageSize, String exclude, String startAfter);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **idPrefix** | String | prefix for which the documents should be returned |
+| **matches** | String | pipe ('\|') separated values for which document IDs (after 'idPrefix') should be matched ('?' any single character, '*' any characters) |
+| **start** | int | number of documents that should be skipped |
+| **pageSize** | int | maximum number of documents that will be retrieved |
+| **exclude** | String | pipe ('\|') separated values for which document IDs (after 'idPrefix') should **not** be matched ('?' any single character, '*' any characters) |
+| **startAfter** | String | skip document fetching until the given ID is found, and return documents after that ID (default: `null`) |
+
+| Return Value | |
+| ------------- | ----- |
+| T[] | Array of entities matching given parameters. |
+
+### Example I
+
+
+
+{`// return up to 128 entities with Id that starts with 'employees/'
+Employee[] result = session
+ .advanced()
+ .loadStartingWith(Employee.class, "employees/", null, 0, 128);
+`}
+
+
+
+### Example II
+
+
+
+{`// return up to 128 entities with Id that starts with 'employees/'
+// and rest of the key begins with "1" or "2" e.g. employees/10, employees/25
+Employee[] result = session
+ .advanced()
+ .loadStartingWith(Employee.class, "employees/", "1*|2*", 0, 128);
+`}
+
+
+
+
+
+## ConditionalLoad
+
+The `conditionalLoad` method takes a document's [change vector](../../server/clustering/replication/change-vector.mdx).
+If the entity is tracked by the session, this method returns the entity. If the entity
+is not tracked, it checks if the provided change vector matches the document's
+current change vector on the server side. If they match, the entity is not loaded.
+If the change vectors _do not_ match, the document is loaded.
+
+In other words, this method can be used to check whether a document has been modified
+since the last time its change vector was recorded, so that the cost of loading it
+can be saved if it has not been modified.
+
+The method is accessible from the `session.advanced()` operations.
+
+
+
+{`<T> ConditionalLoadResult<T> conditionalLoad(Class<T> clazz, String id, String changeVector);
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **clazz** | `Class` | The class of a document to be loaded. |
+| **id** | `String` | The identifier of a document to be loaded. |
+| **changeVector** | `String` | The change vector you want to compare with the server-side change vector. If the change vectors match, the document is not loaded. |
+
+| Return Type | Description |
+|---------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `ConditionalLoadResult<T>` | If the given change vector and the server-side change vector do not match, the method returns the requested entity and its current change vector. If the change vectors match, the method returns `null` as the entity, along with the current change vector. If the specified document does not exist, the method returns only `null`, without a change vector. |
+
+### Example
+
+
+
+{`String changeVector;
+User user = new User("Bob");
+
+try (IDocumentSession session = store.openSession()) \{
+    session.store(user, "users/1");
+    session.saveChanges();
+
+    changeVector = session.advanced().getChangeVectorFor(user);
+\}
+
+try (IDocumentSession session = store.openSession()) \{
+    // New session which does not track our User entity
+
+    // The given change vector matches
+    // the server-side change vector
+    // Does not load the document
+    ConditionalLoadResult<User> result1 = session.advanced()
+        .conditionalLoad(User.class, "users/1", changeVector);
+
+    // Modify the document
+    user.setName("Bob Smith");
+    session.store(user);
+    session.saveChanges();
+
+    // Change vectors do not match
+    // Loads the document
+    ConditionalLoadResult<User> result2 = session.advanced()
+        .conditionalLoad(User.class, "users/1", changeVector);
+\}
+`}
+
+
+
+
+
+## Stream
+
+Entities can be streamed from the server using one of the following `stream` methods from the `advanced` session operations.
+
+
+
+{`<T> CloseableIterator<StreamResult<T>> stream(IDocumentQuery<T> query);
+
+<T> CloseableIterator<StreamResult<T>> stream(IDocumentQuery<T> query, Reference<StreamQueryStatistics> streamQueryStats);
+
+<T> CloseableIterator<StreamResult<T>> stream(IRawDocumentQuery<T> query);
+
+<T> CloseableIterator<StreamResult<T>> stream(IRawDocumentQuery<T> query, Reference<StreamQueryStatistics> streamQueryStats);
+
+<T> CloseableIterator<StreamResult<T>> stream(Class<T> clazz, String startsWith);
+
+<T> CloseableIterator<StreamResult<T>> stream(Class<T> clazz, String startsWith, String matches);
+
+<T> CloseableIterator<StreamResult<T>> stream(Class<T> clazz, String startsWith, String matches, int start);
+
+<T> CloseableIterator<StreamResult<T>> stream(Class<T> clazz, String startsWith, String matches, int start, int pageSize);
+
+<T> CloseableIterator<StreamResult<T>> stream(Class<T> clazz, String startsWith, String matches, int start, int pageSize, String startAfter);
+`}
+
+
+
+| Parameter | Type | Description |
+| ------------- | ------------- | ----- |
+| **startsWith** | `String` | prefix for which documents should be streamed |
+| **matches** | `String` | pipe ('\|') separated values for which document IDs should be matched ('?' any single character, '*' any characters) |
+| **start** | `int` | number of documents that should be skipped |
+| **pageSize** | `int` | maximum number of documents that will be retrieved |
+| **startAfter** | `String` | skip document fetching until the given ID is found, and return documents after that ID (default: `null`) |
+| **streamQueryStats** | `Reference<StreamQueryStatistics>` (out parameter) | Information about the streaming query (amount of results, which index was queried, etc.) |
+
+| Return Value | |
+| ------------- | ----- |
+| CloseableIterator<StreamResult<T>> | Iterator with entities. |
+| streamQueryStats (out parameter) | Information about the streaming query (amount of results, which index was queried, etc.) |
+
+
+### Example I
+
+Stream documents for an ID prefix:
+
+
+
+{`try (CloseableIterator<StreamResult<Employee>> iterator =
+    session.advanced().stream(Employee.class, "employees/")) \{
+    while (iterator.hasNext()) \{
+        StreamResult<Employee> employee = iterator.next();
+ \}
+\}
+`}
+
+
+
+### Example II
+
+Fetch documents for an ID prefix directly into a stream:
+
+
+
+{`ByteArrayOutputStream baos = new ByteArrayOutputStream();
+session
+ .advanced()
+ .loadStartingWithIntoStream("employees/", baos);
+`}
+
+
+
+### Remarks
+
+
+Entities loaded using `stream` will be transient (not attached to session).
+
+
+
+
+## IsLoaded
+
+Use the `isLoaded` method from the `advanced` session operations
+to check whether an entity is attached to the session (e.g. because it has
+been previously loaded).
+
+
+`isLoaded` checks whether an attempt to load a document has already been made
+during the current session, and returns `true` even if such an attempt was
+made and failed.
+If, for example, the `load` method was used to load `employees/3` during
+this session and failed because the document had previously been deleted,
+`isLoaded` will still return `true` for `employees/3` for the remainder
+of the session, simply because of the attempt to load it.
+
+
+
+
+{`boolean isLoaded(String id);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **id** | `String` | Entity ID for which the check should be performed. |
+
+| Return Value | |
+| ------------- | ----- |
+| boolean | Indicates if an entity with a given ID is loaded. |
+
+### Example
+
+
+
+{`boolean isLoaded = session.advanced().isLoaded("employees/1"); // false
+Employee employee = session.load(Employee.class, "employees/1");
+isLoaded = session.advanced().isLoaded("employees/1"); // true
+`}
+
+
+
+
+
+
+
diff --git a/versioned_docs/version-7.1/client-api/session/_loading-entities-nodejs.mdx b/versioned_docs/version-7.1/client-api/session/_loading-entities-nodejs.mdx
new file mode 100644
index 0000000000..c5c594a5c1
--- /dev/null
+++ b/versioned_docs/version-7.1/client-api/session/_loading-entities-nodejs.mdx
@@ -0,0 +1,416 @@
+import Admonition from '@theme/Admonition';
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+
+
+* There are several methods that allow users to load documents from the database and convert them to entities.
+
+* This article covers the following methods:
+
+ - [Load](../../client-api/session/loading-entities.mdx#load)
+ - [Load with Includes](../../client-api/session/loading-entities.mdx#load-with-includes)
+ - [Load - multiple entities](../../client-api/session/loading-entities.mdx#load---multiple-entities)
+ - [LoadStartingWith](../../client-api/session/loading-entities.mdx#loadstartingwith)
+ - [ConditionalLoad](../../client-api/session/loading-entities.mdx#conditionalload)
+ - [IsLoaded](../../client-api/session/loading-entities.mdx#isloaded)
+ - [Stream](../../client-api/session/loading-entities.mdx#stream)
+
+* For loading entities lazily see [perform requests lazily](../../client-api/session/how-to/perform-operations-lazily.mdx).
+
+
+## Load
+
+The most basic way to load a single entity is to use session's `load()` method.
+
+
+
+{`await session.load(id, [documentType]);
+`}
+
+
+
+| Parameters | | |
+| ------------- | ------------- | ----- |
+| **id** | string | Identifier of a document that will be loaded. |
+| **documentType** | function | A class constructor used for reviving the results' entities |
+
+| Return Value | |
+| ------------- | ----- |
+| `Promise