diff --git a/docs/documentation/reference/bpmn20/tasks/task-markers.md b/docs/documentation/reference/bpmn20/tasks/task-markers.md index dd46f87..5b22e92 100644 --- a/docs/documentation/reference/bpmn20/tasks/task-markers.md +++ b/docs/documentation/reference/bpmn20/tasks/task-markers.md @@ -79,7 +79,7 @@ Expressions that resolve to a positive number are also possible: Another way to define the number of instances is to specify the name of a process variable which is a collection using the `loopDataInputRef` child element. For each item in the collection, an instance will be created. Optionally, it is possible to set that specific item of the collection for the instance using the inputDataItem child element. This is shown in the following XML example: ```xml - + assigneeList diff --git a/docs/documentation/reference/deployment-descriptors/bpm-platform-xml.md b/docs/documentation/reference/deployment-descriptors/bpm-platform-xml.md index bfb0ed3..4944180 100644 --- a/docs/documentation/reference/deployment-descriptors/bpm-platform-xml.md +++ b/docs/documentation/reference/deployment-descriptors/bpm-platform-xml.md @@ -80,7 +80,7 @@ The namespace for the `bpm-platform.xml` file is `http://www.operaton.org/schema <process-engine> <bpm-platform> false - See process-engine Reference + See process-engine Reference diff --git a/docs/documentation/reference/deployment-descriptors/processes-xml.md b/docs/documentation/reference/deployment-descriptors/processes-xml.md index a8fd7c9..dc352a6 100644 --- a/docs/documentation/reference/deployment-descriptors/processes-xml.md +++ b/docs/documentation/reference/deployment-descriptors/processes-xml.md @@ -65,7 +65,7 @@ The `processes.xml` may be left blank (can be empty). In this case, default valu <process-engine> <process-application> false - See process-engine Reference + See process-engine Reference <process-archive> diff --git a/docs/documentation/reference/deployment-descriptors/tags/process-engine.mdx b/docs/documentation/reference/deployment-descriptors/tags/process-engine.mdx index 1a7bf50..d03db0f 100644 --- a/docs/documentation/reference/deployment-descriptors/tags/process-engine.mdx +++ b/docs/documentation/reference/deployment-descriptors/tags/process-engine.mdx @@ -297,7 +297,7 @@ The following is a list with the most commonly used process engine configuration - disabledPermissions + disabledPermissions List Define a list of Permissions' names. These permissions will be not taken into account whenever authorization check is performed. @@ -483,7 +483,7 @@ The following is a list with the most commonly used process engine configuration Controls if and when the removal time of an historic instance is set. The default value is end.
- Please also see the historyCleanupStrategy + Please also see the historyCleanupStrategy configuration parameter.

Values: start, end, none (String). @@ -559,7 +559,7 @@ The following is a list with the most commonly used process engine configuration - jdbcBatchProcessing + jdbcBatchProcessing Boolean Controls if the engine executes the jdbc statements as Batch or not. @@ -809,7 +809,7 @@ The following is a list with the most commonly used process engine configuration - javaSerializationFormatEnabled + javaSerializationFormatEnabled Boolean Sets if Java serialization format can be used, when setting variables by their serialized representation. Default value: false @@ -817,7 +817,7 @@ The following is a list with the most commonly used process engine configuration - deserializationTypeValidationEnabled + deserializationTypeValidationEnabled Boolean Sets if validation of types should be performed before JSON and XML deserialization. See Security Instructions for further information. Default value: false @@ -841,7 +841,7 @@ The following is a list with the most commonly used process engine configuration - enablePasswordPolicy + enablePasswordPolicy Boolean Set to true, to enable a password policy for users that are managed by the engine. If a custom password policy is configured, it will be enabled. Otherwise the built-in password policy is activated. @@ -849,7 +849,7 @@ The following is a list with the most commonly used process engine configuration - enableCmdExceptionLogging + enableCmdExceptionLogging Boolean Set to false, to disable logging of unhandled exceptions that occur during command execution. The default setting for this flag is true. Note: There might be duplicate log entries for command exceptions (e.g. when a job fails). @@ -857,7 +857,7 @@ The following is a list with the most commonly used process engine configuration - enableReducedJobExceptionLogging + enableReducedJobExceptionLogging Boolean Set to true, to suppress logging of exceptions that occur during the execution of a job that has retries left. If the job does not have any retries left an exception will still be logged. @@ -865,7 +865,7 @@ The following is a list with the most commonly used process engine configuration - webappsAuthenticationLoggingEnabled + webappsAuthenticationLoggingEnabled Boolean Set to true to enable authentication logging in the Operaton web apps (Cockpit, Tasklist, and Admin). When enabled, the Operaton web apps will produce log statements in the application log for each user initiated log in and log out event. The name of the logger is org.operaton.bpm.webapp. @@ -1085,7 +1085,7 @@ The following is a list with the most commonly used process engine configuration Controls which History cleanup strategy is used. The default value is removalTimeBased.
- Please also see the historyRemovalTimeStrategy configuration parameter.

+ Please also see the historyRemovalTimeStrategy configuration parameter.

Values: removalTimeBased, endTimeBased. diff --git a/docs/documentation/user-guide/dmn-engine/expressions-and-scripts.md b/docs/documentation/user-guide/dmn-engine/expressions-and-scripts.md index 4bb895c..c62b3a7 100644 --- a/docs/documentation/user-guide/dmn-engine/expressions-and-scripts.md +++ b/docs/documentation/user-guide/dmn-engine/expressions-and-scripts.md @@ -134,7 +134,7 @@ DMN engine are as follows: :::note[Legacy Behavior] You can find how to go back to the legacy behavior, where `JUEL` was used for input expressions, -output entries and literal expressions [here](../../reference/deployment-descriptors/tags/process-engine.md#dmnFeelEnableLegacyBehavior). +output entries and literal expressions [here](../../reference/deployment-descriptors/tags/process-engine.mdx#dmnFeelEnableLegacyBehavior). ::: The default language can be changed by setting it directly in the DMN 1.3 XML as global expression language with the `expressionLanguage` attribute of diff --git a/docs/documentation/user-guide/dmn-engine/feel/custom-functions.md b/docs/documentation/user-guide/dmn-engine/feel/custom-functions.md index 9e57742..75ea18a 100644 --- a/docs/documentation/user-guide/dmn-engine/feel/custom-functions.md +++ b/docs/documentation/user-guide/dmn-engine/feel/custom-functions.md @@ -121,5 +121,5 @@ the types listed in the [FEEL Data Types] documentation can be returned by a Cus [FEEL Type Handling]: ../../../user-guide/dmn-engine/feel/type-handling.md#return-types [FEEL Data Types]: https://camunda.github.io/feel-scala/1.11/feel-data-types [Process Engine Plugin]: ../../../user-guide/process-engine/process-engine-plugins.md -[dmnFeelCustomFunctionProviders]: ../../../reference/deployment-descriptors/tags/process-engine.md#dmnFeelCustomFunctionProviders +[dmnFeelCustomFunctionProviders]: ../../../reference/deployment-descriptors/tags/process-engine.mdx#dmnFeelCustomFunctionProviders [Register Custom Function Providers]: #register-custom-function-providers diff --git a/docs/documentation/user-guide/dmn-engine/feel/legacy-behavior.md b/docs/documentation/user-guide/dmn-engine/feel/legacy-behavior.md index e7ea40c..edad7cb 100644 --- a/docs/documentation/user-guide/dmn-engine/feel/legacy-behavior.md +++ b/docs/documentation/user-guide/dmn-engine/feel/legacy-behavior.md @@ -31,6 +31,6 @@ By using the legacy FEEL Engine, the Operaton DMN Engine **only** supports `FEEL simple unary tests. ::: -[legacy behavior flag]: ../../../reference/deployment-descriptors/tags/process-engine.md#dmnFeelEnableLegacyBehavior +[legacy behavior flag]: ../../../reference/deployment-descriptors/tags/process-engine.mdx#dmnFeelEnableLegacyBehavior [fluent feel flag setter]: https://docs.operaton.org/reference/latest/javadoc/org/operaton/bpm/dmn/engine/impl/DefaultDmnEngineConfiguration.html#enableFeelLegacyBehavior [feel flag setter](https://docs.operaton.org/reference/latest/javadoc/org/operaton/bpm/dmn/engine/impl/DefaultDmnEngineConfiguration.html#setEnableFeelLegacyBehavior) diff --git a/docs/documentation/user-guide/logging.md b/docs/documentation/user-guide/logging.md index 15b3c85..7775e83 100644 --- a/docs/documentation/user-guide/logging.md +++ b/docs/documentation/user-guide/logging.md @@ -101,7 +101,7 @@ In case of arising exceptions upon execution, the data is kept in the MDC until i.e. the [JobExecutor](../user-guide/process-engine/the-job-executor.md) or the surrounding command, finished its logging. 
The keys at which the properties are accessible in the MDC can be defined in the -[process engine configuration](../reference/deployment-descriptors/tags/process-engine.md#logging-context-parameters). +[process engine configuration](../reference/deployment-descriptors/tags/process-engine.mdx#logging-context-parameters). In order to access the MDC data, you need to adjust the logging pattern of your logging configuration. An example using Logback could look as follows @@ -176,7 +176,7 @@ The process engine logs on the following categories command context logs including executing atomic operations and bpmn stack traces during exceptions
You can override the default DEBUG log level for bpmn stack traces, see the - Logging level parameters section. + Logging level parameters section. diff --git a/docs/documentation/user-guide/operaton-bpm-run.md b/docs/documentation/user-guide/operaton-bpm-run.md index 4ab61be..3e110d8 100644 --- a/docs/documentation/user-guide/operaton-bpm-run.md +++ b/docs/documentation/user-guide/operaton-bpm-run.md @@ -177,7 +177,7 @@ Just like all the other distros, you can tailor Operaton Run to your needs. To d :::note[Note:] Operaton Run is based on the [Operaton Spring Boot Starter](https://github.com/operaton/operaton/tree/master/spring-boot-starter). -All [configuration properties](spring-boot-integration/configuration.md#operaton-engine-properties) from the operaton-spring-boot-starter are available to customize Operaton Run. +All [configuration properties](spring-boot-integration/configuration.mdx#operaton-engine-properties) from the operaton-spring-boot-starter are available to customize Operaton Run. ::: ### Database diff --git a/docs/documentation/user-guide/process-engine/authorization-service.md b/docs/documentation/user-guide/process-engine/authorization-service.md index 3e292fa..11ca2f5 100644 --- a/docs/documentation/user-guide/process-engine/authorization-service.md +++ b/docs/documentation/user-guide/process-engine/authorization-service.md @@ -1037,5 +1037,5 @@ On these databases, revoke authorizations are effectively unusable. Also see the [Configuration Options](#check-revoke-authorizations) section on this page. -[hist-inst-perm-config-flag]: ../../reference/deployment-descriptors/tags/process-engine.md#enable-historic-instance-permissions +[hist-inst-perm-config-flag]: ../../reference/deployment-descriptors/tags/process-engine.mdx#enable-historic-instance-permissions [Removal-Time-based History Cleanup Strategy]: ../process-engine/history/history-cleanup.md#removal-time-based-strategy diff --git a/docs/documentation/user-guide/process-engine/batch.md b/docs/documentation/user-guide/process-engine/batch.md index 0aa761a..03befe7 100644 --- a/docs/documentation/user-guide/process-engine/batch.md +++ b/docs/documentation/user-guide/process-engine/batch.md @@ -329,7 +329,7 @@ You can configure the property in three ways: [job prioritization]: ../process-engine/the-job-executor.md#job-prioritization [job-definition-priority]: ../process-engine/the-job-executor.md#override-priority-by-job-definition [job-priority]: ../process-engine/the-job-executor.md#set-job-priorities-via-managementservice-api -[invoc-per-batch-job-batch-type]: ../../reference/deployment-descriptors/tags/process-engine.md#invocations-per-batch-job-by-batch-type +[invoc-per-batch-job-batch-type]: ../../reference/deployment-descriptors/tags/process-engine.mdx#invocations-per-batch-job-by-batch-type [Process Engine Plugin]: ../process-engine/process-engine-plugins.md [spring-xml-config]: ../spring-framework-integration/configuration.md -[spring-boot-config]: ../spring-boot-integration/configuration.md#operaton-engine-properties \ No newline at end of file +[spring-boot-config]: ../spring-boot-integration/configuration.mdx#operaton-engine-properties \ No newline at end of file diff --git a/docs/documentation/user-guide/process-engine/database/database-configuration.md b/docs/documentation/user-guide/process-engine/database/database-configuration.md index 3b014db..2d836a9 100644 --- a/docs/documentation/user-guide/process-engine/database/database-configuration.md +++ 
b/docs/documentation/user-guide/process-engine/database/database-configuration.md @@ -29,10 +29,9 @@ The data source that is constructed based on the provided JDBC properties will h ## Jdbc Batch Processing -Another configuration - `jdbcBatchProcessing` - sets if batch processing mode must be used when sending SQL statements to the database. When switched off, statements are executed one by one. +Another configuration - [jdbcBatchProcessing](../../../../documentation/reference/deployment-descriptors/tags/process-engine.mdx#jdbcBatchProcessing) - sets if batch processing mode must be used when sending SQL statements to the database. When switched off, statements are executed one by one. Values: `true` (default), `false`. -Known issues with batch processing: * batch processing is not working for Oracle versions earlier than 12. * when using batch processing on MariaDB and DB2, `jdbcStatementTimeout` is being ignored. @@ -179,4 +178,4 @@ When initializing the engine, a check is performed in order to determine if the This behaviour can be disabled by setting the `skipIsolationLevelCheck` flag to `true`. Doing this will prevent an exception from being thrown and a warning message will be logged instead. -[See here](../../../reference/deployment-descriptors/tags/process-engine.md#configuration-properties) for more details about this and other properties. +[See here](../../../reference/deployment-descriptors/tags/process-engine.mdx#configuration-properties) for more details about this and other properties. diff --git a/docs/documentation/user-guide/process-engine/database/performance.md b/docs/documentation/user-guide/process-engine/database/performance.md index 8dbcea6..4d41fde 100644 --- a/docs/documentation/user-guide/process-engine/database/performance.md +++ b/docs/documentation/user-guide/process-engine/database/performance.md @@ -19,4 +19,4 @@ The task query is one of the heaviest used and most powerful queries of the proc ### Disabling CMMN and Standalone Tasks -To perform transparent access checks, the task query joins the authorization table (`ACT_RU_AUTHORIZATION`). For any kind of process-related filters, it joins the process definition table (`ACT_RE_PROCDEF`). By default, the query uses a left join for these operations. If CMMN and standalone tasks (tasks that are neither related to a BPMN process, nor a CMMN case) are not used, the engine configuration flags `cmmnEnabled` and `standaloneTasksEnabled` can be set to `false`. Then, the left joins are replaced by inner joins which perform better on some databases. See the [configuration properties reference](../../../reference/deployment-descriptors/tags/process-engine.md#configuration-properties) for details on these settings. +To perform transparent access checks, the task query joins the authorization table (`ACT_RU_AUTHORIZATION`). For any kind of process-related filters, it joins the process definition table (`ACT_RE_PROCDEF`). By default, the query uses a left join for these operations. If CMMN and standalone tasks (tasks that are neither related to a BPMN process, nor a CMMN case) are not used, the engine configuration flags `cmmnEnabled` and `standaloneTasksEnabled` can be set to `false`. Then, the left joins are replaced by inner joins which perform better on some databases. See the [configuration properties reference](../../../reference/deployment-descriptors/tags/process-engine.mdx#configuration-properties) for details on these settings. 
diff --git a/docs/documentation/user-guide/process-engine/error-handling.md b/docs/documentation/user-guide/process-engine/error-handling.md index 09e2af6..ef6bbae 100644 --- a/docs/documentation/user-guide/process-engine/error-handling.md +++ b/docs/documentation/user-guide/process-engine/error-handling.md @@ -143,7 +143,7 @@ Learn more on how to assign a custom error code to an exception in the documenta ### Configuration -You can configure the exception error codes feature in your [process engine configuration](../../reference/deployment-descriptors/tags/process-engine.md#exception-codes): +You can configure the exception error codes feature in your [process engine configuration](../../reference/deployment-descriptors/tags/process-engine.mdx#exception-codes): * To disable the exception codes feature entirely, set the flag disableExceptionCode in your process engine configuration to true. diff --git a/docs/documentation/user-guide/process-engine/expression-language.md b/docs/documentation/user-guide/process-engine/expression-language.md index 6b17059..3e9d963 100644 --- a/docs/documentation/user-guide/process-engine/expression-language.md +++ b/docs/documentation/user-guide/process-engine/expression-language.md @@ -154,7 +154,7 @@ The following example shows usage of expression language as condition of a seque ```xml - ```${test == 'foo'} + ${test == 'foo'} ``` @@ -185,7 +185,7 @@ a bean. - ```${myBean.calculateX()} + ${myBean.calculateX()} @@ -243,7 +243,7 @@ conditional sequence flow can directly check a variable value: ```xml - ```${test == 'start'} + ${test == 'start'} ``` diff --git a/docs/documentation/user-guide/process-engine/history/history-cleanup.md b/docs/documentation/user-guide/process-engine/history/history-cleanup.md index 6614526..83d5168 100644 --- a/docs/documentation/user-guide/process-engine/history/history-cleanup.md +++ b/docs/documentation/user-guide/process-engine/history/history-cleanup.md @@ -147,7 +147,7 @@ The history cleanup jobs can be found via the API method `HistoryService#findHis #### Required property -The history time to live is mandatory, any deployment or re-deployment of any model resource (BPMN, DMN, CMMN) that contains a historyTimeToLive of null will be prevented. Unless explicitly disabled via [process engine configuration](../../../reference/deployment-descriptors/tags/process-engine.md#enforceHistoryTimeToLive). To define a default TTL for process definitions and decision definitions if no other value is defined check [historyTimeToLive configuration](../../../reference/deployment-descriptors/tags/process-engine.md#historyTimeToLive). +The history time to live is mandatory, any deployment or re-deployment of any model resource (BPMN, DMN, CMMN) that contains a historyTimeToLive of null will be prevented. Unless explicitly disabled via [process engine configuration](../../../reference/deployment-descriptors/tags/process-engine.mdx#enforceHistoryTimeToLive). To define a default TTL for process definitions and decision definitions if no other value is defined check [historyTimeToLive configuration](../../../reference/deployment-descriptors/tags/process-engine.mdx#historyTimeToLive). #### Process/decision/case definitions @@ -179,7 +179,7 @@ Setting the value to `null` clears the TTL. The same can be done via restref pag For decision and case definitions, TTL can be defined in a similar way. 
In case you want to provide an engine-wide default TTL for all process, decision and case definitions, -use the ["historyTimeToLive" attribute](../../../reference/deployment-descriptors/tags/process-engine.md#historytimetolive) +use the ["historyTimeToLive" attribute](../../../reference/deployment-descriptors/tags/process-engine.mdx#historytimetolive) of the process engine configuration. This value is applied as the default whenever new definitions without TTL are deployed. Note that it therefore does not change the TTL of already deployed definitions. Use the API method given above to change TTL in this case. #### Batches @@ -344,4 +344,4 @@ related to the cleanup execution since the particular node ignores them. **Please Note:** The history cleanup configuration properties that are unrelated to the cleanup execution (e.g., time to live, removal time strategy) still need to be defined among all nodes. -[configuration-options]: ../../../reference/deployment-descriptors/tags/process-engine.md#history-cleanup-configuration-parameters \ No newline at end of file +[configuration-options]: ../../../reference/deployment-descriptors/tags/process-engine.mdx#history-cleanup-configuration-parameters \ No newline at end of file diff --git a/docs/documentation/user-guide/process-engine/identity-service.md b/docs/documentation/user-guide/process-engine/identity-service.md index 262a796..ddefd27 100644 --- a/docs/documentation/user-guide/process-engine/identity-service.md +++ b/docs/documentation/user-guide/process-engine/identity-service.md @@ -395,7 +395,7 @@ The mechanism is configurable with the following properties and respective defau * `loginDelayMaxTime=60` * `loginDelayBase=3` -For more information, please check the process engine's [login properties](../../reference/deployment-descriptors/tags/process-engine.md#login-parameters) section. +For more information, please check the process engine's [login properties](../../reference/deployment-descriptors/tags/process-engine.mdx#login-parameters) section. Calculation of the delay is done via the formula: baseTime * factor^(attempt-1). The behaviour with the default configuration will be: diff --git a/docs/documentation/user-guide/process-engine/multi-tenancy.md b/docs/documentation/user-guide/process-engine/multi-tenancy.md index 383ad00..a9c9bc4 100644 --- a/docs/documentation/user-guide/process-engine/multi-tenancy.md +++ b/docs/documentation/user-guide/process-engine/multi-tenancy.md @@ -485,7 +485,7 @@ If different tenants should work on entirely different databases, they have to u For schema- or table-based isolation, a single data source can be used which means that resources like a connection pool can be shared among multiple engines. To achieve this, -* the configuration option [databaseTablePrefix](../../reference/deployment-descriptors/tags/process-engine.md#configuration-protperties) can be used to configure database access. +* the configuration option [databaseTablePrefix](../../reference/deployment-descriptors/tags/process-engine.mdx#configuration-protperties) can be used to configure database access. * consider switching on the setting `useSharedSqlSessionFactory`. The setting controls whether each process engine instance should parse and maintain a local copy of the mybatis mapping files or whether a single, shared copy can be used. Since the mappings require a lot of heap (>30MB), it is recommended to switch this on. This way only one copy needs to be allocated. 
:::warning[Considerations for useSharedSqlSessionFactory setting] diff --git a/docs/documentation/user-guide/process-engine/process-engine-api.md b/docs/documentation/user-guide/process-engine/process-engine-api.md index e9c8ed2..42f7ca0 100644 --- a/docs/documentation/user-guide/process-engine/process-engine-api.md +++ b/docs/documentation/user-guide/process-engine/process-engine-api.md @@ -113,7 +113,7 @@ You can find more information on this in the Java Querying for results without restricting the maximum number of results or querying for a vast number of results can lead to a high memory consumption or even to out of memory exceptions. With the help -of the [Query Maximum Results Limit](../../reference/deployment-descriptors/tags/process-engine.md#queryMaxResultsLimit), +of the [Query Maximum Results Limit](../../reference/deployment-descriptors/tags/process-engine.mdx#queryMaxResultsLimit), you can restrict the maximum number of results. This restriction is only enforced in the following cases: diff --git a/docs/documentation/user-guide/process-engine/process-instance-migration.md b/docs/documentation/user-guide/process-engine/process-instance-migration.md index 877edee..c5b27ba 100644 --- a/docs/documentation/user-guide/process-engine/process-instance-migration.md +++ b/docs/documentation/user-guide/process-engine/process-instance-migration.md @@ -746,7 +746,7 @@ the following requirements: * The migration plan adheres to [BPMN-element-specific considerations](#bpmn-specific-api-and-effects) * A set variable must not be of type `Object` **AND** its `serializationFormat` must not be `application/x-java-serialized-object` * Validation is skipped when the engine configuration flag `javaSerializationFormatEnabled` is set to `true` - * Please see [Process Engine Configuration Reference](../../reference/deployment-descriptors/tags/process-engine.md#javaSerializationFormatEnabled) for more details + * Please see [Process Engine Configuration Reference](../../reference/deployment-descriptors/tags/process-engine.mdx#javaSerializationFormatEnabled) for more details If validation reports errors, migration fails with a `MigrationPlanValidationException` providing a `MigrationPlanValidationReport` object with details on the diff --git a/docs/documentation/user-guide/process-engine/scripting.md b/docs/documentation/user-guide/process-engine/scripting.md index 464af0f..2c4d79c 100644 --- a/docs/documentation/user-guide/process-engine/scripting.md +++ b/docs/documentation/user-guide/process-engine/scripting.md @@ -350,11 +350,11 @@ Note that for JavaScript execution you might be able to choose the script engine You can use the following process engine configuration flags to influence the configuration of specific script engines: -* [configureScriptEngineHostAccess](../../reference/deployment-descriptors/tags/process-engine.md#configureScriptEngineHostAccess) - +* [configureScriptEngineHostAccess](../../reference/deployment-descriptors/tags/process-engine.mdx#configureScriptEngineHostAccess) - Specifies whether host language resources like classes and their methods are accessible or not. -* [enableScriptEngineLoadExternalResources](../../reference/deployment-descriptors/tags/process-engine.md#enableScriptEngineLoadExternalResources) - +* [enableScriptEngineLoadExternalResources](../../reference/deployment-descriptors/tags/process-engine.mdx#enableScriptEngineLoadExternalResources) - Specifies whether external resources can be loaded from file system or not. 
-* [enableScriptEngineNashornCompatibility](../../reference/deployment-descriptors/tags/process-engine.md#enableScriptEngineNashornCompatibility) - +* [enableScriptEngineNashornCompatibility](../../reference/deployment-descriptors/tags/process-engine.mdx#enableScriptEngineNashornCompatibility) - Specifies whether Nashorn compatibility mode is enabled or not. ### System properties diff --git a/docs/documentation/user-guide/process-engine/the-job-executor.md b/docs/documentation/user-guide/process-engine/the-job-executor.md index fc05323..eb8a898 100644 --- a/docs/documentation/user-guide/process-engine/the-job-executor.md +++ b/docs/documentation/user-guide/process-engine/the-job-executor.md @@ -260,7 +260,7 @@ In addition, the process engine has a concept of job suspension. For example, a To optimize the acquisition of jobs that need to be executed immediately, the `DUEDATE_` column is not set (`null`) and a (positive) null check is added as a condition for acquisition. -In case each job must have a `DUEDATE_` set, the optimization can be disabled. This can be done by setting the `ensureJobDueDateNotNull` [process engine configuration flag](../../reference/deployment-descriptors/tags/process-engine.md#ensureJobDueDateNotNull) to `true`. +In case each job must have a `DUEDATE_` set, the optimization can be disabled. This can be done by setting the `ensureJobDueDateNotNull` [process engine configuration flag](../../reference/deployment-descriptors/tags/process-engine.mdxx#ensureJobDueDateNotNull) to `true`. However, any jobs created with a `null` value for `DUEDATE_` before disabling the optimization will not be picked up by the Job Acquisition phase, unless the jobs are explicitly updated with a due date through the **Set Due Date** [Java](https://docs.operaton.org/reference/latest/javadoc/org/operaton/bpm/engine/ManagementService.html#setJobDuedate(java.lang.String,java.util.Date)) / [Rest](https://docs.operaton.org/reference/latest/rest-api/#tag/Job/operation/setJobRetries) or **Set Retries** Java / [REST](https://docs.operaton.org/reference/latest/rest-api/#tag/Job/operation/setJobRetries) APIs. @@ -370,7 +370,7 @@ For example: By default, the Job Executor executes all jobs regardless of their priorities. Some jobs might be more important to finish quicker than others, so we assign them priorities and set `jobExecutorAcquireByPriority` to `true` as described above. Depending on the workload, the Job Executor might be able to execute all jobs eventually. But if the load is high enough, we might face starvation where a Job Executor is always busy working on high-priority jobs and never manages to execute the lower priority jobs. -To prevent this, you can specify a priority range for the job executor by setting values for [`jobExecutorPriorityRangeMin`](../../reference/deployment-descriptors/tags/process-engine.md#jobExecutorPriorityRangeMin) or [`jobExecutorPriorityRangeMax`](../../reference/deployment-descriptors/tags/process-engine.md#jobExecutorPriorityRangeMax) (or both). The Job Executor will only acquire jobs that are inside its priority range (inclusive). Both properties are optional, so it is fine only to set one of them. +To prevent this, you can specify a priority range for the job executor by setting values for [`jobExecutorPriorityRangeMin`](../../reference/deployment-descriptors/tags/process-engine.mdx#jobExecutorPriorityRangeMin) or [`jobExecutorPriorityRangeMax`](../../reference/deployment-descriptors/tags/process-engine.mdxx#jobExecutorPriorityRangeMax) (or both). 
The Job Executor will only acquire jobs that are inside its priority range (inclusive). Both properties are optional, so it is fine only to set one of them. To avoid job starvation, make sure to have no gaps between Job Executor priority ranges. If, for example, Job Executor A has a priority range of 0 to 100 and Job Executor B executes jobs from priority 200 to `Long.MAX_VALUE` any job that receives a priority of 101 to 199 will never be executed. Job starvation can also occur with `batch jobs` and `history cleanup jobs` as both types of jobs also receive priorities (default: `0`). You can configure them via their respective properties: `batchJobPriority` and `historyCleanupJobPriority`. @@ -605,7 +605,7 @@ If there is a use case where the subprocess-jobs **should not be performed in pa ``` :::warning -The property `jobExecutorAcquireExclusiveOverProcessHierarchies` is by default set to `false`. See the property under the `Configuration Properties` section. +The property `jobExecutorAcquireExclusiveOverProcessHierarchies` is by default set to `false`. See the property under the `Configuration Properties` section. Keep in mind that enabling the feature to guarantee exclusive jobs across all subprocesses originating from a root process might have performance implications, especially for process definitions that involve complex and numerous hierarchies. diff --git a/docs/documentation/user-guide/process-engine/variables.md b/docs/documentation/user-guide/process-engine/variables.md index 5bf19f8..1fb0503 100644 --- a/docs/documentation/user-guide/process-engine/variables.md +++ b/docs/documentation/user-guide/process-engine/variables.md @@ -313,7 +313,7 @@ com.example.Order retrievedOrder = (com.example.Order) retrievedTypedObjectValue :::warning[Java serialization format] Be aware that when using a serialized representation of variables, the Java serialization format is forbidden by default. You should either use another format (JSON or XML) or explicitly enable the Java serialization - with the help of the [`javaSerializationFormatEnabled`](../../reference/deployment-descriptors/tags/process-engine.md#javaSerializationFormatEnabled) configuration flag. + with the help of the [`javaSerializationFormatEnabled`](../../reference/deployment-descriptors/tags/process-engine.mdx#javaSerializationFormatEnabled) configuration flag. However, please make sure to read the [Security Implication](../security.md#variable-values-from-untrusted-sources) first before enabling this. ::: @@ -481,7 +481,7 @@ Input mappings can also be used with multi-instance constructs, in which the map If an Activity is canceled (e.g. due to throwing a BPMN error), IO mapping is still executed. This can lead to exceptions if the output mapping references variables that do not exist in the scope of the activity at that time. -The default behavior is that the engine still tries to execute output mappings on canceled activities and fails with an exception if a variable is not found. By enabling the [skipOutputMappingOnCanceledActivities](../../reference/deployment-descriptors/tags/process-engine.md#skipOutputMappingOnCanceledActivities) engine configuration flag (i.e. setting it to `true`) the engine will not perform output mappings on any canceled activity. +The default behavior is that the engine still tries to execute output mappings on canceled activities and fails with an exception if a variable is not found. 
By enabling the [skipOutputMappingOnCanceledActivities](../../reference/deployment-descriptors/tags/process-engine.mdx#skipOutputMappingOnCanceledActivities) engine configuration flag (i.e. setting it to `true`) the engine will not perform output mappings on any canceled activity. [inputOutput]: ../../reference/bpmn20/custom-extensions/extension-elements.md#inputoutput [inputParameter]: ../../reference/bpmn20/custom-extensions/extension-elements.md#inputparameter diff --git a/docs/documentation/user-guide/quarkus-integration/configuration.md b/docs/documentation/user-guide/quarkus-integration/configuration.md index 8e86fa0..41ae1ca 100644 --- a/docs/documentation/user-guide/quarkus-integration/configuration.md +++ b/docs/documentation/user-guide/quarkus-integration/configuration.md @@ -238,7 +238,7 @@ quarkus.datasource.my-datasource.jdbc.url=jdbc:h2:mem:operaton;TRACE_LEVEL_FILE= quarkus.operaton.datasource=my-datasource ``` -[engine-properties]: ../../reference/deployment-descriptors/tags/process-engine.md#configuration-properties +[engine-properties]: ../../reference/deployment-descriptors/tags/process-engine.mdx#configuration-properties [executor-properties]: ../../reference/deployment-descriptors/tags/job-executor.md#job-acquisition-configuration-properties [quarkus-datasource]: https://quarkus.io/guides/datasource diff --git a/docs/documentation/user-guide/runtime-container-integration/jboss.md b/docs/documentation/user-guide/runtime-container-integration/jboss.md index bf57e60..1982cc8 100644 --- a/docs/documentation/user-guide/runtime-container-integration/jboss.md +++ b/docs/documentation/user-guide/runtime-container-integration/jboss.md @@ -88,7 +88,7 @@ It should be easy to see that the configuration consists of a single process eng If you start up your Wildfly server with this configuration, it will automatically create the corresponding services and expose them through the management model. -For a complete list of all configuration options, please refer to the [Process Engine Configuration](../../reference/deployment-descriptors/tags/process-engine.md). +For a complete list of all configuration options, please refer to the [Process Engine Configuration](../../reference/deployment-descriptors/tags/process-engine.mdx). ## Provide a Custom Process Engine Configuration Class diff --git a/docs/documentation/user-guide/security.md b/docs/documentation/user-guide/security.md index 8fefcab..4464022 100644 --- a/docs/documentation/user-guide/security.md +++ b/docs/documentation/user-guide/security.md @@ -82,7 +82,7 @@ Note that changing the time to live to a lower value can harm the performance of #### Enable authentication logging in the Operaton web apps It is generally recommended to enable logging of log in attempts (successful and failed) as well as log out events. -In Operaton, you can enable authentication logging in the Operaton web apps by setting the `webappsAuthenticationLoggingEnabled` process engine [configuration flag](../reference/deployment-descriptors/tags/process-engine.md#webappsAuthenticationLoggingEnabled) to true. All user-initiated log in and log out events will then be logged to the application log using the `org.operaton.bpm.webapp` [logger](../user-guide/logging.md#process-engine). +In Operaton, you can enable authentication logging in the Operaton web apps by setting the `webappsAuthenticationLoggingEnabled` process engine [configuration flag](../reference/deployment-descriptors/tags/process-engine.mdx#webappsAuthenticationLoggingEnabled) to true. 
All user-initiated log in and log out events will then be logged to the application log using the `org.operaton.bpm.webapp` [logger](../user-guide/logging.md#process-engine). The following events produce log statements: @@ -238,7 +238,8 @@ results or querying for a vast number of results can lead to a high memory cons out of memory exceptions. You can mitigate the risk of an attack by defining a limit for the maximum number of results -(`queryMaxResultsLimit`) in the [process engine configuration](../reference/deployment-descriptors/tags/process-engine.md#queryMaxResultsLimit). +(`queryMaxResultsLimit`) in the [process engine configuration](../reference/deployment-descriptors/tags/process-engine.mdxx#queryMaxResultsLimit). + :::note[Heads-up!] To gain the full feature set of the Webapps, and not suffer any UX degradation due to unavailable data, the `queryMaxResultsLimit` must be set to `2000`. @@ -260,7 +261,7 @@ Operaton handles many XML files containing configurations of process engines, de * Prevention against XML eXternal Entity (XXE) injections according to [OWASP](https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/XML_External_Entity_Prevention_Cheat_Sheet.md) * Feature Secure Processing (FSP) of XML files according to [Oracle](https://docs.oracle.com/javase/8/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING) which introduces [limits](https://docs.oracle.com/javase/tutorial/jaxp/limits/limits.html) for several XML properties -If the limitations on XML files introduced by XXE prevention need to be removed, XXE processing can be enabled via `enableXxeProcessing` in the [process engine configuration](../reference/deployment-descriptors/tags/process-engine.md#configuration-properties). +If the limitations on XML files introduced by XXE prevention need to be removed, XXE processing can be enabled via `enableXxeProcessing` in the [process engine configuration](../reference/deployment-descriptors/tags/process-engine.mdxx#enableXxeProcessing). FSP itself can not be disabled in the engine. All properties that are influenced by this can however be configured in the environment via system properties and the `jaxp.properties` file. See the [Oracle documentation](https://docs.oracle.com/javase/tutorial/jaxp/limits/using.html) on how to determine the right limits and how to set them. @@ -291,12 +292,12 @@ If an attacker can access these endpoints, they can exploit so-called _serializa ### Java objects using the JDK built-in `application/x-java-serialized-object` data format Starting with version 7.9, by default, it is not possible to set variables of type `Object` **AND** the data format `application/x-java-serialized-object`. -The behavior can be restored with the process engine configuration flag [`javaSerializationFormatEnabled`](../reference/deployment-descriptors/tags/process-engine.md#javaSerializationFormatEnabled). +The behavior can be restored with the process engine configuration flag [`javaSerializationFormatEnabled`](../reference/deployment-descriptors/tags/process-engine.mdxx#javaSerializationFormatEnabled). However, please bear in mind that enabling the java serialization format might make the process engine vulnerable against the aforementioned attacking scenario. 
### JSON/XML serialized objects using Spin -Therefore, we recommend enabling the whitelisting of allowed Java classes by enabling the property [deserializationTypeValidationEnabled](../reference/deployment-descriptors/tags/process-engine.md#deserializationTypeValidationEnabled) in the process engine configuration. With this, the process engine validates the class names of submitted variables against a whitelist of allowed Java class and package names. Any non-whitelisted content is rejected. The default values are safe, but may be too restrictive for your use case. You can use the engine properties `deserializationAllowedPackages` and `deserializationAllowedClasses` to extend the default whitelist with package and class names of Java types that you consider save to deserialize in your environment. +Therefore, we recommend enabling the whitelisting of allowed Java classes by enabling the property [deserializationTypeValidationEnabled](../reference/deployment-descriptors/tags/process-engine.mdxx#deserializationTypeValidationEnabled) in the process engine configuration. With this, the process engine validates the class names of submitted variables against a whitelist of allowed Java class and package names. Any non-whitelisted content is rejected. The default values are safe, but may be too restrictive for your use case. You can use the engine properties `deserializationAllowedPackages` and `deserializationAllowedClasses` to extend the default whitelist with package and class names of Java types that you consider save to deserialize in your environment. In case this default behavior needs further adjustment, a custom validator can be implemented and registered in the engine with the engine property `deserializationTypeValidator`. The provided object needs to be a subtype of `org.operaton.bpm.engine.runtime.DeserializationTypeValidator` and offer an implementation of the `#validate` method. @@ -317,7 +318,7 @@ the `ACT_HI_OP_LOG` table. The amount of table entries depends on the number of Using the process engine configuration flag `logEntriesPerSyncOperationLimit`, the number of created entries to the user operation log can be limited for synchronous API calls. By default, one operation log entry is written per API call, regardless of how many entities were affected (default property value is `1`). If you choose to change `logEntriesPerSyncOperationLimit`, select a value that you are certain your system can handle. -For more information about the possible values for `logEntriesPerSyncOperationLimit`, visit the [configuration documentation](../reference/deployment-descriptors/tags/process-engine.md#logEntriesPerSyncOperationLimit). +For more information about the possible values for `logEntriesPerSyncOperationLimit`, visit the [configuration documentation](../reference/deployment-descriptors/tags/process-engine.mdxx#logEntriesPerSyncOperationLimit). Currently, the following APIs are affected: diff --git a/docs/documentation/user-guide/spring-boot-integration/configuration.mdx b/docs/documentation/user-guide/spring-boot-integration/configuration.mdx index 40d034c..98c7be8 100644 --- a/docs/documentation/user-guide/spring-boot-integration/configuration.mdx +++ b/docs/documentation/user-guide/spring-boot-integration/configuration.mdx @@ -909,7 +909,7 @@ server: ``` Further details of the session cookie like the `SameSite` flag can be configured via -[operaton.bpm.webapp.session-cookie](../spring-boot-integration/configuration.md#session-cookie) in the `application.yaml`. 
+[operaton.bpm.webapp.session-cookie](../spring-boot-integration/configuration.mdxx#session-cookie) in the `application.yaml`. # Configuring Spin DataFormats diff --git a/docs/documentation/user-guide/spring-boot-integration/process-applications.md b/docs/documentation/user-guide/spring-boot-integration/process-applications.md index ddf96fc..f85345a 100644 --- a/docs/documentation/user-guide/spring-boot-integration/process-applications.md +++ b/docs/documentation/user-guide/spring-boot-integration/process-applications.md @@ -28,7 +28,7 @@ public class MyApplication { } ``` -Some configuration can be done via Spring Boot configuration parameters. Check [the list of currently available parameters](../spring-boot-integration/configuration.md#operaton-bpm-application). +Some configuration can be done via Spring Boot configuration parameters. Check [the list of currently available parameters](../spring-boot-integration/configuration.mdx#operaton-bpm-application). ## Using Deployment Callbacks diff --git a/docs/documentation/webapps/shared-options/authentication.md b/docs/documentation/webapps/shared-options/authentication.md index d78e9f8..18a1a13 100644 --- a/docs/documentation/webapps/shared-options/authentication.md +++ b/docs/documentation/webapps/shared-options/authentication.md @@ -56,7 +56,7 @@ This section describes how to configure the authentication cache time to live. ##### Spring Boot -You can find the configuration properties for the Spring Boot Starter in the [User Guide](../../user-guide/spring-boot-integration/configuration.md#auth-cache). +You can find the configuration properties for the Spring Boot Starter in the [User Guide](../../user-guide/spring-boot-integration/configuration.mdx#auth-cache). ##### Java EE/Jakarta Servlet Application Servers/Runtimes diff --git a/docs/documentation/webapps/shared-options/cookie-security.md b/docs/documentation/webapps/shared-options/cookie-security.md index f6517d3..13316ad 100644 --- a/docs/documentation/webapps/shared-options/cookie-security.md +++ b/docs/documentation/webapps/shared-options/cookie-security.md @@ -118,7 +118,7 @@ Here you can find how to configure the session cookie for the following containe * [Tomcat](../../installation/full/tomcat/configuration.md#session-cookie-in-webapps) * [Wildfly](../../installation/full/wildfly/configuration.md#session-cookie-in-webapps) -* [Spring Boot](../../user-guide/spring-boot-integration/configuration.md#session-cookie) +* [Spring Boot](../../user-guide/spring-boot-integration/configuration.mdx#session-cookie) ### CSRF Cookie diff --git a/docs/documentation/webapps/shared-options/csrf-prevention.md b/docs/documentation/webapps/shared-options/csrf-prevention.md index e1d8e49..c5b926b 100644 --- a/docs/documentation/webapps/shared-options/csrf-prevention.md +++ b/docs/documentation/webapps/shared-options/csrf-prevention.md @@ -14,7 +14,7 @@ menu: A CSRF filter is enabled by default, validating each modifying request performed through the webapps. The filter implements a (per-session) _Synchronization Token_ method for CSRF validation with an optional _Same Origin with Standard Headers_ verification. In Spring Boot Starter, the configuration needs to be made in the `application.yaml`. -Please read more about it [here](../../user-guide/spring-boot-integration/configuration.md#csrf). +Please read more about it [here](../../user-guide/spring-boot-integration/configuration.mdx#csrf). 
If you would like to enable the additional _Same Origin with Standard Headers_ verification, the `targetOrigin` init-parameter should be set in the `web.xml` file of your application. That, and some additional optional initialization parameters are: diff --git a/docs/documentation/webapps/shared-options/header-security.md b/docs/documentation/webapps/shared-options/header-security.md index 390426f..92dc1fd 100644 --- a/docs/documentation/webapps/shared-options/header-security.md +++ b/docs/documentation/webapps/shared-options/header-security.md @@ -159,7 +159,7 @@ Choose a container from the list and learn where to configure the HTTP Security * [Tomcat](../../installation/full/tomcat/configuration.md#security-related-http-headers-in-webapps) * [Wildfly](../../installation/full/wildfly/configuration.md#security-related-http-headers-in-webapps) -* [Spring Boot](../../user-guide/spring-boot-integration/configuration.md) +* [Spring Boot](../../user-guide/spring-boot-integration/configuration.mdx) ## How to Configure? diff --git a/toolkit/check-doc-errors.js b/toolkit/check-doc-errors.js new file mode 100644 index 0000000..10d9e81 --- /dev/null +++ b/toolkit/check-doc-errors.js @@ -0,0 +1,260 @@ +#!/usr/bin/env node + +/** + * Documentation Error Checker + * + * Scans documentation for real errors that will cause problems: + * - Deprecated anchors (breaks navigation) + * - Duplicate IDs in actual HTML (not in code blocks) + * + * Run from your documentation project root: + * node check-doc-errors.js + * node check-doc-errors.js --fix + */ + +const fs = require('fs').promises; +const path = require('path'); + +const CONFIG = { + docsPath: './docs', + fix: process.argv.includes('--fix'), +}; + +const report = { + filesScanned: 0, + issues: [], + fixedCount: 0, +}; + +async function findDocFiles(dir) { + const files = []; + + async function search(currentDir) { + try { + const entries = await fs.readdir(currentDir, { withFileTypes: true }); + for (const entry of entries) { + const fullPath = path.join(currentDir, entry.name); + if (entry.isDirectory()) { + if (!entry.name.startsWith('_') && entry.name !== 'node_modules') { + await search(fullPath); + } + } else if (entry.isFile() && /\.(md|mdx)$/.test(entry.name)) { + files.push(fullPath); + } + } + } catch (err) { /* ignore */ } + } + + await search(dir); + return files; +} + +function getLineNumber(content, index) { + return content.substring(0, index).split('\n').length; +} + +/** + * Check if an index position is inside a code block + */ +function isInsideCodeBlock(content, index) { + // Check if inside fenced code block (```) + const beforeIndex = content.substring(0, index); + const fenceMatches = beforeIndex.match(/```/g); + if (fenceMatches && fenceMatches.length % 2 === 1) { + return true; // Odd number of ``` means we're inside a code block + } + + // Check if inside
<pre> or <code> tags
+  const lastPreOpen = beforeIndex.lastIndexOf('<pre');
+  const lastPreClose = beforeIndex.lastIndexOf('</pre>');
+  if (lastPreOpen > lastPreClose) {
+    return true;
+  }
+  
+  // Check if the line starts with 4 spaces or tab (indented code block)
+  const lineStart = beforeIndex.lastIndexOf('\n') + 1;
+  const linePrefix = content.substring(lineStart, index);
+  if (/^(\s{4}|\t)/.test(linePrefix)) {
+    return true;
+  }
+  
+  // Check if inside inline code (`)
+  const lineEnd = content.indexOf('\n', index);
+  const line = content.substring(lineStart, lineEnd === -1 ? content.length : lineEnd);
+  const posInLine = index - lineStart;
+  
+  let inBacktick = false;
+  for (let i = 0; i < posInLine; i++) {
+    if (line[i] === '`') inBacktick = !inBacktick;
+  }
+  if (inBacktick) return true;
+  
+  return false;
+}
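A quick sanity check of the fence heuristic above (throwaway snippet, not part of the patch; it assumes `isInsideCodeBlock` from this script is in scope, e.g. pasted at the bottom of the file):

```js
// Positions preceded by an odd number of ``` markers count as "inside" a fenced block.
const sample = 'Some text\n```\nid="inside"\n```\nid="outside"\n';
console.log(isInsideCodeBlock(sample, sample.indexOf('id="inside"')));  // true  (one ``` before it)
console.log(isInsideCodeBlock(sample, sample.indexOf('id="outside"'))); // false (two ``` before it)
```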
+
+/**
+ * Check for duplicate IDs in actual HTML elements (not in code blocks)
+ */
+function checkDuplicateIds(content, filePath) {
+  const issues = [];
+  const idPattern = /\bid=["']([^"']+)["']/gi;
+  const ids = new Map();
+  
+  let match;
+  while ((match = idPattern.exec(content)) !== null) {
+    // Skip if inside code block
+    if (isInsideCodeBlock(content, match.index)) {
+      continue;
+    }
+    
+    const id = match[1];
+    const line = getLineNumber(content, match.index);
+    
+    if (ids.has(id)) {
+      issues.push({
+        line,
+        match: `id="${id}"`,
+        message: `Duplicate ID "${id}" (first at line ${ids.get(id)})`,
+      });
+    } else {
+      ids.set(id, line);
+    }
+  }
+  
+  return issues;
+}
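For illustration, the duplicate check applied to a made-up snippet: the `id="task"` inside the XML fence is ignored, while the one on the last line is reported as a duplicate of line 1.

```js
const doc = '<h2 id="task">Task</h2>\n```xml\n<task id="task"/>\n```\n<h3 id="task">Task again</h3>\n';
console.log(checkDuplicateIds(doc, 'example.md'));
// → [ { line: 5, match: 'id="task"', message: 'Duplicate ID "task" (first at line 1)' } ]
```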
+
+/**
+ * Check for deprecated <a name> anchors (not in code blocks)
+ */
+function checkDeprecatedAnchors(content, filePath) {
+  const issues = [];
+  const pattern = /<a\s+name=["']([^"']+)["']\s*>\s*<\/a>/gi;
+  
+  let match;
+  while ((match = pattern.exec(content)) !== null) {
+    // Skip if inside code block
+    if (isInsideCodeBlock(content, match.index)) {
+      continue;
+    }
+    
+    const line = getLineNumber(content, match.index);
+    issues.push({
+      line,
+      match: match[0].substring(0, 80),
+      suggestion: `Change to id="${match[1]}"`,
+    });
+  }
+  
+  return issues;
+}
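A matching sanity check for the anchor detection (illustrative only, assuming the pattern above; the cell mirrors the `jdbcBatchProcessing` hit shown in the log below):

```js
const cell = '<td><a name="jdbcBatchProcessing"></a>jdbcBatchProcessing</td>';
console.log(checkDeprecatedAnchors(cell, 'example.md'));
// → [ { line: 1, match: '<a name="jdbcBatchProcessing"></a>', suggestion: 'Change to id="jdbcBatchProcessing"' } ]
```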
+
+async function checkFile(filePath) {
+  try {
+    let content = await fs.readFile(filePath, 'utf8');
+    const relPath = path.relative(CONFIG.docsPath, filePath).replace(/\\/g, '/');
+    let modified = false;
+    
+    // Check for duplicate IDs
+    const dupes = checkDuplicateIds(content, filePath);
+    for (const dupe of dupes) {
+      report.issues.push({
+        file: relPath,
+        name: 'Duplicate ID attributes',
+        line: dupe.line,
+        match: dupe.match,
+        message: dupe.message,
+      });
+    }
+    
+    // Check for deprecated anchors
+    const anchors = checkDeprecatedAnchors(content, filePath);
+    for (const anchor of anchors) {
+      report.issues.push({
+        file: relPath,
+        name: 'Deprecated <a name> anchor',
+        line: anchor.line,
+        match: anchor.match,
+        suggestion: anchor.suggestion,
+      });
+    }
+    
+    // Apply fix for <a name="..."></a> anchors in table cells
+    if (CONFIG.fix) {
+      const fixPattern = /<a\s+name=["']([^"']+)["']\s*>\s*<\/a>/gi;
+      const before = content;
+      content = content.replace(fixPattern, '<a id="$1"></a>');
+      if (content !== before) {
+        modified = true;
+        report.fixedCount += (before.match(fixPattern) || []).length;
+      }
+    }
+    
+    if (modified) {
+      await fs.writeFile(filePath, content, 'utf8');
+    }
+    
+  } catch (err) {
+    console.error(`Error: ${filePath}: ${err.message}`);
+  }
+}
+
+async function main() {
+  console.log('🔍 Documentation Error Checker\n');
+  
+  try {
+    await fs.access(CONFIG.docsPath);
+  } catch {
+    console.error('Error: docs/ directory not found');
+    process.exit(1);
+  }
+  
+  const files = await findDocFiles(CONFIG.docsPath);
+  report.filesScanned = files.length;
+  
+  for (const file of files) {
+    await checkFile(file);
+  }
+  
+  // Print results
+  console.log('='.repeat(60));
+  
+  if (report.issues.length === 0) {
+    console.log('✅ No errors found!\n');
+  } else {
+    console.log(`❌ Found ${report.issues.length} error(s):\n`);
+    
+    // Group by file
+    const byFile = {};
+    report.issues.forEach(issue => {
+      if (!byFile[issue.file]) byFile[issue.file] = [];
+      byFile[issue.file].push(issue);
+    });
+    
+    for (const [file, issues] of Object.entries(byFile)) {
+      console.log(`  ${file}:`);
+      issues.forEach(issue => {
+        console.log(`    Line ${issue.line}: ${issue.name}`);
+        console.log(`      ${issue.match}`);
+        if (issue.suggestion) console.log(`      → ${issue.suggestion}`);
+        if (issue.message) console.log(`      → ${issue.message}`);
+      });
+      console.log();
+    }
+  }
+  
+  console.log('='.repeat(60));
+  console.log(`Files scanned: ${report.filesScanned}`);
+  console.log(`Errors found: ${report.issues.length}`);
+  if (CONFIG.fix) console.log(`Errors fixed: ${report.fixedCount}`);
+  console.log('='.repeat(60) + '\n');
+  
+  if (!CONFIG.fix && report.issues.length > 0) {
+    console.log('💡 Run with --fix to auto-fix <a name> issues\n');
+  }
+}
+
+main().catch(err => {
+  console.error('Fatal error:', err);
+  process.exit(1);
+});
diff --git a/toolkit/check-doc-errors.log b/toolkit/check-doc-errors.log
new file mode 100644
index 0000000..dde4306
--- /dev/null
+++ b/toolkit/check-doc-errors.log
@@ -0,0 +1,25 @@
+$ node check-doc-errors.js 
+🔍 Documentation Error Checker
+
+============================================================
+❌ Found 3 error(s):
+
+  documentation/reference/bpmn20/tasks/task-markers.md:
+    Line 105: Duplicate ID attributes
+      id="miTasks"
+      → Duplicate ID "miTasks" (first at line 95)
+
+  documentation/user-guide/process-engine/database/database-configuration.md:     
+    Line 32: Deprecated <a name> anchor
+      <a name="jdbcBatchProcessing"></a>
+      → Change to id="jdbcBatchProcessing"
+
+  documentation/user-guide/process-engine/expression-language.md:
+    Line 385: Duplicate ID attributes
+      id="task"
+      → Duplicate ID "task" (first at line 184)
+
+============================================================
+Files scanned: 413
+Errors found: 3
+============================================================
\ No newline at end of file
diff --git a/toolkit/fix-broken-links.js b/toolkit/fix-broken-links.js
new file mode 100644
index 0000000..01d9f90
--- /dev/null
+++ b/toolkit/fix-broken-links.js
@@ -0,0 +1,175 @@
+#!/usr/bin/env node
+
+/**
+ * Fix Broken Markdown Links
+ * 
+ * This script fixes broken markdown links where:
+ * 1. Links point to .md files but actual files are .mdx
+ * 2. Links have incorrect relative paths
+ * 
+ * Run from your documentation project root:
+ *   node fix-broken-links.js --dry-run
+ *   node fix-broken-links.js
+ */
+
+const fs = require('fs').promises;
+const path = require('path');
+
+const CONFIG = {
+  docsPath: './docs',
+  dryRun: process.argv.includes('--dry-run'),
+};
+
+// Known fixes based on the warnings
+const LINK_FIXES = [
+  // process-engine.md -> process-engine.mdx (in deployment-descriptors/tags/)
+  {
+    // negative lookahead keeps already-migrated ".mdx" links from being rewritten to ".mdxx"
+    pattern: /(deployment-descriptors\/tags\/process-engine)\.md(?!x)(#[^\s)\]]*)?/g,
+    replacement: '$1.mdx$2',
+    description: 'Fix process-engine.md -> process-engine.mdx'
+  },
+  // configuration.md -> configuration.mdx (for spring-boot-integration)
+  {
+    pattern: /(spring-boot-integration\/configuration)\.md(?!x)(#[^\s)\]]*)?/g,
+    replacement: '$1.mdx$2',
+    description: 'Fix spring-boot-integration/configuration.md -> configuration.mdx'
+  },
+];
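To show why the `(?!x)` guard matters (hypothetical inputs): without it, the `.md` prefix of an already-migrated `.mdx` link still matches, and a second run rewrites it to `.mdxx`.

```js
const p = /(deployment-descriptors\/tags\/process-engine)\.md(?!x)(#[^\s)\]]*)?/g;
console.log('deployment-descriptors/tags/process-engine.md#exception-codes'.replace(p, '$1.mdx$2'));
// → deployment-descriptors/tags/process-engine.mdx#exception-codes
console.log('deployment-descriptors/tags/process-engine.mdx#exception-codes'.replace(p, '$1.mdx$2'));
// → unchanged; already-migrated links are left alone
```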
+
+const report = {
+  filesScanned: 0,
+  filesModified: 0,
+  fixesApplied: 0,
+  fixes: []
+};
+
+async function findMarkdownFiles(dir) {
+  const files = [];
+  
+  async function search(currentDir) {
+    try {
+      const entries = await fs.readdir(currentDir, { withFileTypes: true });
+      
+      for (const entry of entries) {
+        const fullPath = path.join(currentDir, entry.name);
+        
+        if (entry.isDirectory()) {
+          if (entry.name.startsWith('_') || entry.name === 'node_modules') {
+            continue;
+          }
+          await search(fullPath);
+        } else if (entry.isFile() && /\.(md|mdx)$/.test(entry.name)) {
+          files.push(fullPath);
+        }
+      }
+    } catch (err) {
+      // Ignore errors
+    }
+  }
+  
+  await search(dir);
+  return files;
+}
+
+async function fixFile(filePath) {
+  try {
+    let content = await fs.readFile(filePath, 'utf8');
+    let modified = false;
+    const fileRelPath = path.relative(CONFIG.docsPath, filePath);
+    
+    for (const fix of LINK_FIXES) {
+      const matches = content.match(fix.pattern);
+      if (matches) {
+        const originalContent = content;
+        content = content.replace(fix.pattern, fix.replacement);
+        
+        if (content !== originalContent) {
+          modified = true;
+          report.fixesApplied += matches.length;
+          report.fixes.push({
+            file: fileRelPath,
+            pattern: fix.description,
+            count: matches.length,
+            matches: matches.slice(0, 3) // Show first 3 matches
+          });
+        }
+      }
+    }
+    
+    if (modified && !CONFIG.dryRun) {
+      await fs.writeFile(filePath, content, 'utf8');
+      report.filesModified++;
+    } else if (modified) {
+      report.filesModified++;
+    }
+    
+    return modified;
+  } catch (err) {
+    console.error(`Error processing ${filePath}: ${err.message}`);
+    return false;
+  }
+}
+
+async function main() {
+  console.log('🔗 Fixing broken markdown links...\n');
+  console.log(`Mode: ${CONFIG.dryRun ? 'DRY RUN (no changes)' : 'LIVE'}\n`);
+  
+  // Check if docs directory exists
+  try {
+    await fs.access(CONFIG.docsPath);
+  } catch {
+    console.error('Error: docs/ directory not found');
+    console.error('Please run this script from your Docusaurus project root');
+    process.exit(1);
+  }
+  
+  // Find all markdown files
+  console.log('Scanning for markdown files...');
+  const mdFiles = await findMarkdownFiles(CONFIG.docsPath);
+  report.filesScanned = mdFiles.length;
+  console.log(`Found ${mdFiles.length} markdown files\n`);
+  
+  // Process each file
+  console.log('Processing files...');
+  for (const file of mdFiles) {
+    const modified = await fixFile(file);
+    if (modified) {
+      const relPath = path.relative(CONFIG.docsPath, file);
+      console.log(`  ✓ ${relPath}`);
+    }
+  }
+  
+  // Print report
+  console.log('\n' + '='.repeat(70));
+  console.log('SUMMARY');
+  console.log('='.repeat(70));
+  console.log(`Files scanned: ${report.filesScanned}`);
+  console.log(`Files modified: ${report.filesModified}`);
+  console.log(`Total fixes applied: ${report.fixesApplied}`);
+  
+  if (report.fixes.length > 0) {
+    console.log('\nFixes by file:');
+    report.fixes.forEach(fix => {
+      console.log(`\n  ${fix.file}`);
+      console.log(`    ${fix.pattern}: ${fix.count} occurrences`);
+      fix.matches.forEach(m => console.log(`      - ${m}`));
+    });
+  }
+  
+  console.log('='.repeat(70) + '\n');
+  
+  if (CONFIG.dryRun) {
+    console.log('💡 This was a dry run. Run without --dry-run to apply changes.\n');
+  } else {
+    console.log('✅ Fixes applied!\n');
+    console.log('Next steps:');
+    console.log('1. Run npm start to verify the warnings are fixed');
+    console.log('2. Check git diff to review changes');
+    console.log('3. Commit the changes\n');
+  }
+}
+
+main().catch(err => {
+  console.error('Fatal error:', err);
+  process.exit(1);
+});