From e999b1ba86ec29ab6e971b65ac3a7e329596f7bf Mon Sep 17 00:00:00 2001
From: Ilian Iliev
Date: Thu, 7 May 2026 16:21:06 +0300
Subject: [PATCH 1/2] Explaining the caveats in writing to the same key with
 workaround example

---
 .../data-pipelines/data-denormalization.md    |   6 ++
 .../redis-write-same-key.md                   | 102 ++++++++++++++++++
 2 files changed, 108 insertions(+)
 create mode 100644 content/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key.md

diff --git a/content/integrate/redis-data-integration/data-pipelines/data-denormalization.md b/content/integrate/redis-data-integration/data-pipelines/data-denormalization.md
index c7ea293ae3..4f28773471 100644
--- a/content/integrate/redis-data-integration/data-pipelines/data-denormalization.md
+++ b/content/integrate/redis-data-integration/data-pipelines/data-denormalization.md
@@ -104,6 +104,12 @@ In the example above, the `addresses` job uses the default key pattern to write
 
 You can also use custom keys for the parent entity, as long as you use the same key for all jobs that write to the same Redis key.
 
+{{< note >}}
+In the case of using the same key for different jobs, deleting any of the entities will result in the key being remove from the target.
+For an example workaround, see [Write to the same key from multiple jobs]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key" >}}).
+{{< /note >}}
+
+
 ## Joining one-to-many relationships
 
 To join one-to-many relationships, you can use the *Nesting* strategy.
diff --git a/content/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key.md b/content/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key.md
new file mode 100644
index 0000000000..32c56dbce0
--- /dev/null
+++ b/content/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key.md
@@ -0,0 +1,102 @@
+---
+Title: Write to the same key from multiple jobs
+alwaysopen: false
+categories:
+- docs
+- integrate
+- rs
+- rdi
+description: null
+group: di
+linkTitle: Write to the same key
+summary: Redis Data Integration keeps Redis in sync with the primary database in near
+  real time.
+type: integration
+weight: 100
+---
+
+Use this pattern when two or more jobs write related source entities, such as
+`customer` and `address`, to the same Redis JSON document.
+
+When multiple jobs write to the same Redis key, a delete event from any of the
+source entities can delete the key from the target. To work around this, use
+[`row_format: full`]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-row-format#full" >}})
+so the job can inspect the
+[`opcode`]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-opcode-example" >}}),
+convert delete events into update events before writing to Redis, and write JSON
+documents with `on_update: merge`.
+
+{{< note >}}
+Use the same key expression in all jobs that write to the shared Redis key. For
+delete events, read key values from `before` or `key` because `after` is `null`.
+{{< /note >}}
+
+## Customer job
+
+```yaml
+# jobs/customer.yaml
+name: customers
+
+source:
+  table: customers
+
+output:
+  - uses: redis.write
+    with:
+      data_type: json
+      on_update: merge
+      key:
+        expression: concat(['customer:', id])
+        language: jmespath
+
+```
+
+## Address job
+
+For delete events from the `addresses` table, this job sets all fields to `null`
+to instruct RDI to remove them from the target JSON document. This behavior is
+available in RDI 1.15.0 or later when native JSON merge is enabled and the target
+database uses RedisJSON 2.6.0 or later.
+
+```yaml
+# jobs/addresses.yaml
+name: addresses
+
+source:
+  table: addresses
+  row_format: full
+
+transform:
+  - uses: add_field
+    with:
+      fields:
+        # For create/update records, we take the new values as is.
+        # If the record is a deletion, we set all fields to null.
+        - field: after
+          expression: |
+            (opcode != 'd' && after)
+            ||
+            from_entries(to_entries(before)[].{key: key, value: `null`})
+          language: jmespath
+
+        # Treat deletes as updates so that we can use the same output configuration
+        - field: opcode
+          expression: opcode == 'd' && 'u' || opcode
+          language: jmespath
+
+  # If you have overlapping field names e.g. FK and PK have the same name, or both table have
+  # a field called "id" and may want to remove the field from the after object to prevent it
+  # from overwriting the PK.
+  - uses: remove_field
+    with:
+      field: after.id
+
+output:
+  - uses: redis.write
+    with:
+      data_type: json
+      on_update: merge
+      key:
+        expression: concat(['customer:', after.customer_id || before.customer_id])
+        language: jmespath
+```

From 9d6287da148ebb500b6ba522dddb881cd6888e9f Mon Sep 17 00:00:00 2001
From: ilianiliev-redis
Date: Mon, 11 May 2026 13:42:18 +0300
Subject: [PATCH 2/2] Apply suggestions from code review

Co-authored-by: andy-stark-redis <164213578+andy-stark-redis@users.noreply.github.com>
---
 .../data-pipelines/data-denormalization.md                    | 2 +-
 .../data-pipelines/transform-examples/redis-write-same-key.md | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/integrate/redis-data-integration/data-pipelines/data-denormalization.md b/content/integrate/redis-data-integration/data-pipelines/data-denormalization.md
index 4f28773471..f38cdd1ce6 100644
--- a/content/integrate/redis-data-integration/data-pipelines/data-denormalization.md
+++ b/content/integrate/redis-data-integration/data-pipelines/data-denormalization.md
@@ -105,7 +105,7 @@ In the example above, the `addresses` job uses the default key pattern to write
 You can also use custom keys for the parent entity, as long as you use the same key for all jobs that write to the same Redis key.
 
 {{< note >}}
-In the case of using the same key for different jobs, deleting any of the entities will result in the key being remove from the target.
+If you are using the same key for different jobs, deleting any of the entities will result in the key being removed from the target.
 For an example workaround, see [Write to the same key from multiple jobs]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key" >}}).
 {{< /note >}}
 
diff --git a/content/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key.md b/content/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key.md
index 32c56dbce0..ec89ae8f19 100644
--- a/content/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key.md
+++ b/content/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key.md
@@ -84,8 +84,8 @@ transform:
           expression: opcode == 'd' && 'u' || opcode
           language: jmespath
 
-  # If you have overlapping field names e.g. FK and PK have the same name, or both table have
-  # a field called "id" and may want to remove the field from the after object to prevent it
+  # If you have overlapping field names (for example, FK and PK have the same name, or both tables have
+  # a field called "id"), you may want to remove the field from the after object to prevent it
   # from overwriting the PK.
   - uses: remove_field
     with:
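
---

The delete-to-merge workaround that this patch documents can be sketched outside RDI. The following Python is illustrative only and not part of the patch: the function names and event shape are assumptions, not an RDI API. It mimics what `jobs/addresses.yaml` does — flip a delete into an update whose fields are all null, drop the PK so it cannot clobber the parent's `id`, then apply an RFC 7386-style merge in which a null value removes the key (the semantics RedisJSON's `JSON.MERGE` uses).

```python
# Sketch of the addresses-job workaround in plain Python.
# `transform` and `json_merge` are hypothetical helpers for illustration,
# not RDI functions; the event shape ({opcode, before, after}) follows the
# full row format shown in the patch.

def transform(record):
    """Mimic the add_field/remove_field steps of jobs/addresses.yaml."""
    if record["opcode"] == "d":
        # Deletes carry their data in `before`; null out every field so a
        # JSON merge removes them from the target document.
        record["after"] = {k: None for k in record["before"]}
        record["opcode"] = "u"  # treat the delete as an update
    # Drop the PK so it cannot overwrite the parent key's `id` field.
    record["after"].pop("id", None)
    return record

def json_merge(target, patch):
    """RFC 7386-style merge: null values delete keys, like JSON.MERGE."""
    for k, v in patch.items():
        if v is None:
            target.pop(k, None)
        else:
            target[k] = v
    return target

# A delete event for an address row, applied to a shared customer document:
event = {"opcode": "d",
         "before": {"id": 7, "customer_id": 1, "city": "Sofia"},
         "after": None}
doc = {"id": 1, "name": "Ana", "customer_id": 1, "city": "Sofia"}
event = transform(event)
doc = json_merge(doc, event["after"])
# The address fields are removed; the customer fields survive:
# doc == {"id": 1, "name": "Ana"}
```

Because the delete arrives as a merge of nulls rather than a key deletion, the customer job's data stays intact under the shared `customer:<id>` key.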