In the example above, the `addresses` job uses the default key pattern to write

You can also use custom keys for the parent entity, as long as you use the same key for all jobs that write to the same Redis key.

{{< note >}}
If two or more jobs write to the same key, a delete event for any one of the source entities removes the key from the target.
For an example workaround, see [Write to the same key from multiple jobs]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-write-same-key" >}}).
{{< /note >}}


## Joining one-to-many relationships

To join one-to-many relationships, you can use the *Nesting* strategy.
---
Title: Write to the same key from multiple jobs
alwaysopen: false
categories:
- docs
- integrate
- rs
- rdi
description: null
group: di
linkTitle: Write to the same key
summary: Redis Data Integration keeps Redis in sync with the primary database in near real time.
type: integration
weight: 100
---

Use this pattern when two or more jobs write related source entities, such as
`customer` and `address`, to the same Redis JSON document.

When multiple jobs write to the same Redis key, a delete event from any of the
source entities can delete the key from the target. To work around this, use
[`row_format: full`]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-row-format#full" >}})
so the job can inspect the
[`opcode`]({{< relref "/integrate/redis-data-integration/data-pipelines/transform-examples/redis-opcode-example" >}}),
convert delete events into update events before writing to Redis, and write JSON
documents with `on_update: merge`.

{{< note >}}
Use the same key expression in all jobs that write to the shared Redis key. For
delete events, read key values from `before` or `key` because `after` is `null`.
{{< /note >}}

## Customer job

```yaml
# jobs/customer.yaml
name: customers

source:
  table: customers

output:
  - uses: redis.write
    with:
      data_type: json
      on_update: merge
      key:
        expression: concat(['customer:', id])
        language: jmespath
```

## Address job

For delete events from the `addresses` table, this job sets all fields to `null`
to instruct RDI to remove them from the target JSON document. This behavior is
available in RDI 1.15.0 or later when native JSON merge is enabled and the target
database uses RedisJSON 2.6.0 or later.
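Setting fields to `null` works because a native JSON merge follows RFC 7386 (JSON Merge Patch) semantics: a `null` value in the patch removes the corresponding key from the target document. As a minimal illustration of those semantics (plain Python, not RDI or RedisJSON code):

```python
# Minimal RFC 7386 (JSON Merge Patch) sketch: a null value deletes the key.
# Illustrative only; RDI and the target database handle this for you.

def json_merge(target, patch):
    if not isinstance(patch, dict):
        return patch  # a non-object patch replaces the target outright
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null removes the key
        else:
            result[key] = json_merge(result.get(key), value)
    return result

doc = {"first_name": "Ada", "street": "Main St", "city": "Oslo"}
patch = {"street": None, "city": None}  # what the addresses job emits on delete
print(json_merge(doc, patch))
# → {'first_name': 'Ada'}
```

Only the nulled address fields are removed; fields written to the same document by the customer job survive the merge.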

```yaml
# jobs/addresses.yaml
name: addresses

source:
  table: addresses
  row_format: full

transform:
  - uses: add_field
    with:
      fields:
        # For create/update records, we take the new values as is.
        # If the record is a deletion, we set all fields to null.
        - field: after
          expression: |
            (opcode != 'd' && after)
            ||
            from_entries(to_entries(before)[].{key: key, value: `null`})
          language: jmespath

        # Treat deletes as updates so that we can use the same output configuration.
        - field: opcode
          expression: opcode == 'd' && 'u' || opcode
          language: jmespath

  # If you have overlapping field names (for example, the FK and PK have the
  # same name, or both tables have a field called "id"), you may want to remove
  # the field from the after object to prevent it from overwriting the PK.
  - uses: remove_field
    with:
      field: after.id

output:
  - uses: redis.write
    with:
      data_type: json
      on_update: merge
      key:
        expression: concat(['customer:', after.customer_id || before.customer_id])
        language: jmespath
```
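As a rough check of the logic above, the two `add_field` steps and the `remove_field` step can be mimicked in plain Python. This is illustrative only, not RDI code; it assumes a change record in the full row format used by the expressions above (`opcode`, `before`, `after`):

```python
# Illustrative Python equivalent of the transform in jobs/addresses.yaml.

def transform(record):
    if record["opcode"] == "d":
        # add_field `after`: on delete, keep the same keys as `before` but
        # set every value to null so a JSON merge removes them from the target.
        record["after"] = {key: None for key in record["before"]}
        # add_field `opcode`: treat the delete as an update.
        record["opcode"] = "u"
    # remove_field `after.id`: stop the child PK from overwriting parent fields.
    if record["after"]:
        record["after"].pop("id", None)
    return record

deleted = {
    "opcode": "d",
    "before": {"id": 7, "customer_id": 42, "city": "Oslo"},
    "after": None,
}
print(transform(deleted))
# → {'opcode': 'u', 'before': {'id': 7, 'customer_id': 42, 'city': 'Oslo'},
#    'after': {'customer_id': None, 'city': None}}
```

The rewritten record reaches `redis.write` as an update, so the merge removes only the address fields instead of deleting the shared `customer:<id>` key.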