diff --git a/src/_data/sidenav/main.yml b/src/_data/sidenav/main.yml
index df0fff781f..c1b1f0ac97 100644
--- a/src/_data/sidenav/main.yml
+++ b/src/_data/sidenav/main.yml
@@ -362,6 +362,8 @@ sections:
         title: BigQuery Data Graph Setup
       - path: /unify/data-graph/setup-guides/databricks-setup/
         title: Databricks Data Graph Setup
+      - path: /unify/data-graph/setup-guides/redshift-setup/
+        title: Redshift Data Graph Setup
       - path: /unify/data-graph/setup-guides/snowflake-setup/
         title: Snowflake Data Graph Setup
     - section_title: Linked Events
diff --git a/src/connections/destinations/index.md b/src/connections/destinations/index.md
index 4ef6b2da3f..37bc1759b7 100644
--- a/src/connections/destinations/index.md
+++ b/src/connections/destinations/index.md
@@ -230,7 +230,6 @@ Segment supports IP Allowlisting in [all destinations](/docs/connections/destina
 - [LiveRamp](/docs/connections/destinations/catalog/actions-liveramp-audiences/)
 - [TradeDesk](/docs/connections/destinations/catalog/actions-the-trade-desk-crm/)
 - [Amazon Kinesis](/docs/connections/destinations/catalog/amazon-kinesis/)
-- [Destination Functions](/docs/connections/functions/destination-functions/)

 Destinations that are not supported receive traffic from randomly assigned IP addresses.
diff --git a/src/connections/functions/index.md b/src/connections/functions/index.md
index e173effb0e..637420393f 100644
--- a/src/connections/functions/index.md
+++ b/src/connections/functions/index.md
@@ -46,4 +46,12 @@ To learn more, visit [destination insert functions](/docs/connections/functions/
 With Functions Copilot, you can instrument custom integrations, enrich and transform data, and even secure sensitive data nearly instantaneously without writing a line of code.

-To learn more, visit the [Functions Copilot documentation](/docs/connections/functions/copilot/).
\ No newline at end of file
+To learn more, visit the [Functions Copilot documentation](/docs/connections/functions/copilot/).
+
+#### IP Allowlisting
+
+IP Allowlisting uses a NAT gateway to route outbound Functions traffic from Segment’s servers to your destinations through a limited range of IP addresses, which can prevent malicious actors from establishing TCP and UDP connections with your integrations.
+
+IP Allowlisting is available for customers on Business Tier plans.
+
+To learn more, visit [Segment's IP Allowlisting documentation](/docs/connections/destinations/#ip-allowlisting).
\ No newline at end of file
diff --git a/src/engage/journeys/event-triggered-journeys.md b/src/engage/journeys/event-triggered-journeys.md
index 24de8a47e2..9b4010a2b8 100644
--- a/src/engage/journeys/event-triggered-journeys.md
+++ b/src/engage/journeys/event-triggered-journeys.md
@@ -10,8 +10,8 @@ Unlike traditional audience-based journeys that rely on pre-defined user segment
 On this page, you'll learn how to create an event-triggered journey, configure entry conditions, and work with published event-triggered journeys.

-> info "Private Beta"
-> Event-Triggered Journeys is in private beta, and Segment is actively working on this feature. Some functionality may change before it becomes generally available. During private beta, Event-Triggered Journeys is not HIPAA eligible.
+> info "Public Beta"
+> Event-Triggered Journeys is in public beta, and Segment is actively working on this feature. Some functionality may change before it becomes generally available. Event-Triggered Journeys is not currently HIPAA eligible.
 ## Overview
diff --git a/src/engage/journeys/journey-context.md b/src/engage/journeys/journey-context.md
index 5798c6d3ed..4466399871 100644
--- a/src/engage/journeys/journey-context.md
+++ b/src/engage/journeys/journey-context.md
@@ -8,8 +8,8 @@ hidden: true
 This page explains Journey context, which can help you dynamically adapt each journey to individual user interactions, creating highly relevant, real-time workflows.

-> info "Private Beta"
-> Event-Triggered Journeys is in private beta, and Segment is actively working on this feature. Some functionality may change before it becomes generally available. During private beta, Event-Triggered Journeys is not HIPAA eligible.
+> info "Public Beta"
+> Event-Triggered Journeys is in public beta, and Segment is actively working on this feature. Some functionality may change before it becomes generally available. Event-Triggered Journeys is not currently HIPAA eligible.

 ## Overview

@@ -17,22 +17,27 @@ Unlike traditional audience-based journeys, which rely solely on user progress t
 With journey context, you can:

-- Split journeys based on event attributes or outcomes.
+
 - Personalize customer experiences using real-time event data.
 - Enable advanced use cases like abandonment recovery, dynamic delays, and more.

+For example:
+
+- When a user cancels an appointment, send a message that includes the time and location of the appointment they just canceled.
+- When a user abandons a cart, send a message that includes the current contents of their cart.
+
 ## What is Journey context?

 Journey context is a flexible data structure that captures key details about the events and conditions that shape a customer’s journey. Journey context provides a point-in-time snapshot of event properties, making accurate and reliable data available throughout the journey.

-Journey context stores:
-- **Event properties**: Information tied to specific user actions, like `Appointment ID` or `Order ID`.
-- **Split evaluations**: Results of branch decisions made during the journey, enabling future steps to reference these outcomes.
+Journey context stores event property information tied to specific user actions, like `Appointment ID` or `Order ID`.

 Journey context doesn't store:
 - **Profile traits**, which may change over time.
 - **Audience memberships**, which can evolve dynamically.

+However, the up-to-date values of profile traits and audience memberships can be added to a payload sent to a destination.
+
 This focused approach ensures journey decisions are always based on static, reliable data points.

 ### Examples of stored context

@@ -49,7 +54,9 @@ Event properties are the foundation of Journey context. Examples of event proper
 - `Order ID`
 - An array of cart contents

-Segment captures each event’s properties as a point-in-time snapshot when the event occurs, ensuring that the data remains consistent for use in personalization, branching, and other advanced workflow steps.
+Segment captures each event’s properties as a point-in-time snapshot when the event occurs, ensuring that the data remains consistent for use in personalization.
+
+
 ## Using Journey context in Event-Triggered Journeys

@@ -59,7 +66,7 @@ This is useful for scenarios like:

 - **Abandonment recovery:** Checking whether a user completed a follow-up action, like a purchase (see the sketch below).
 - **Customizing messages:** Using event properties to include relevant details in communications.
-- **Scheduling workflows:** Triggering actions based on contextual data, like the time of a scheduled appointment.
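+For instance, the purchase event a journey waits on during abandonment recovery might arrive as a Track call payload like the following sketch. This is illustrative only: the event name and property names (`order_id`, `order_value`, `cart`) are hypothetical, not a required schema:
+
+```json
+{
+  "type": "track",
+  "event": "checkout_completed",
+  "userId": "u-12345",
+  "properties": {
+    "order_id": "ORD-789",
+    "order_value": 125.5,
+    "cart": [
+      { "sku": "SKU-001", "name": "Espresso maker", "price": 125.5 }
+    ]
+  }
+}
+```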
+
 By incorporating event-specific data at each step, journey context helps workflows remain relevant and adaptable to user actions.

@@ -67,35 +74,39 @@ By incorporating event-specific data at each step, journey context helps workflo
 Journey context is referenced and updated at various steps in an event-triggered journey. Each step plays a specific role in adapting the journey to user behavior or conditions.

-#### Wait for event split
+#### Hold Until split

 This step checks whether a user performs a specific event within a given time window. If the event occurs, Segment adds its details to journey context for use in later steps.

-For example, a journey may wait to see if a `checkout_completed` event occurs within two hours of a user starting checkout. If the event happens, the workflow can proceed; otherwise, it may take an alternate path. The data captured includes event properties (like `Order ID`) and the results of the split evaluation.
+For example, a journey may wait to see if a `checkout_completed` event occurs within two hours of a user starting checkout. If the event happens, its properties are added to context and the workflow can proceed; otherwise, it may take an alternate path. The data captured includes event properties (like `Order ID`).

-#### Context split
-
-This step evaluates conditions using data already stored in journey context. Based on the conditions, users are routed to different branches of the journey.
+If a Hold Until branch is set to send profiles back to the beginning of the step when the event is performed, those events are also captured in context. Because these events may or may not occur during a journey, they appear as available in later steps but aren't guaranteed for every user's progression through the journey.

-For example, a user who triggers an event with a property like `order_value > 100` might be routed to one branch, while other users follow a different path. The split uses attributes from journey context, like event properties or prior split outcomes, to determine the appropriate branch.

 #### Send to destination

@@ -107,7 +118,9 @@ For example, a payload sent to a messaging platform might include `Order ID` and
 The structure of journey context organizes event-specific data and makes it accessible throughout the journey workflow. By standardizing how data is stored, Segment makes it easier to reference, use, and send this information at different stages of a journey.

-Journey context is organized as a collection of key-value pairs, where each key represents a data point or category, and its value holds the associated data. This structure supports various types of information, like event properties, split outcomes, and function outputs.
+Journey context is organized as a collection of key-value pairs, where each key represents a data point or category, and its value holds the associated data.

 For example, when a user triggers an event like `Appointment Scheduled`, Segment stores its properties (like `Appointment ID`, `Appointment Start Time`) as key-value pairs. You can then reference these values in later journey steps or include them in external payloads.
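+
+As a sketch of what that event might look like on the wire (property names are illustrative and mirror the example below), the `Appointment Scheduled` Track call could arrive as:
+
+```json
+{
+  "type": "track",
+  "event": "Appointment Scheduled",
+  "userId": "u-67890",
+  "properties": {
+    "appointment_id": 12345,
+    "start_time": "2024-12-06T10:00:00Z",
+    "end_time": "2024-12-06T11:00:00Z",
+    "provider_name": "Dr. Smith"
+  }
+}
+```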
Smith" - }, - "split_decision": { - "split_name": "appointment_type_split", - "branch_chosen": "existing_patient" - }, - "function_output": { - "discount_percentage": 15 + "journey_context": { + "appointment_scheduled": { + "appointment_id": 12345, + "start_time": "2024-12-06T10:00:00Z", + "end_time": "2024-12-06T11:00:00Z", + "provider_name": "Dr. Smith" + }, + "appointment_rescheduled": { + "appointment_id": 12345, + "start_time": "2024-12-07T10:00:00Z", + "end_time": "2024-12-07T11:00:00Z", + "provider_name": "Dr. Jameson" + } } } ``` This payload contains: -- **Event properties**: Captured under the `appointment_scheduled` key. -- **Split outcomes**: Documented in the `split_decision` object. -- **Function results**: Stored in the `function_output` object for use in later steps. +- **Entry Event properties**: Captured under the `appointment_scheduled` key. +- **Hold Until Event properties**: Captured under the `appointment_rescheduled` key. ## Journey context and Event-Triggered Journeys Journey context underpins the flexibility and precision of Event-Triggered Journeys. By capturing key details about events and decisions as they happen, journey context lets workflows respond dynamically to user actions and conditions. -Whether you're orchestrating real-time abandonment recovery, scheduling contextual delays, or personalizing messages with event-specific data, journey context provides the tools to make your workflows more relevant and effective. +Whether you're orchestrating real-time abandonment recovery or personalizing messages with event-specific data, journey context provides the tools to make your workflows more relevant and effective. + +To learn more about how Event-Triggered Journeys work, visit the [Event-Triggered Journeys documentation](/docs/engage/journeys/event-triggered-journeys/). -To learn more about how Event-Triggered Journeys work, visit the [Event-Triggered Journeys documentation](/docs/engage/journeys/event-triggered-journeys/). \ No newline at end of file + \ No newline at end of file diff --git a/src/unify/data-graph/setup-guides/databricks-setup.md b/src/unify/data-graph/setup-guides/databricks-setup.md index 2303bb3594..202c0a6956 100644 --- a/src/unify/data-graph/setup-guides/databricks-setup.md +++ b/src/unify/data-graph/setup-guides/databricks-setup.md @@ -1,5 +1,5 @@ --- -title: Databricks Setup +title: Databricks Data Graph Setup plan: unify redirect_from: - '/unify/linked-profiles/setup-guides/databricks-setup' diff --git a/src/unify/data-graph/setup-guides/redshift-setup.md b/src/unify/data-graph/setup-guides/redshift-setup.md index a6da05fd3e..167376e28a 100644 --- a/src/unify/data-graph/setup-guides/redshift-setup.md +++ b/src/unify/data-graph/setup-guides/redshift-setup.md @@ -2,71 +2,121 @@ title: Redshift Data Graph Setup beta: true plan: unify -hidden: true redirect_from: - '/unify/linked-profiles/setup-guides/redshift-setup' --- -> info "Linked Audiences is in public beta" -> Linked Audiences (with Data Graph, Linked Events) is in public beta, and Segment is actively working on this feature. Some functionality may change before it becomes generally available. - > info "" -> At this time, you can only use Redshift with Linked Events. +> Redshift for Data Graph is in beta and Segment is actively working on this feature. Some functionality may change before it becomes generally available. This feature is governed by Twilio Segment’s [First Access and Beta Preview Terms](https://www.twilio.com/en-us/legal/tos){:target="_blank"}. 
 ## Getting started

+You need to be an AWS Redshift account admin to set up the Segment Redshift connector, and you need write permissions for the `__segment_reverse_etl` schema.
+
 To get started with Redshift:

 1. Log in to Redshift and select the Redshift cluster you want to connect.
-2. Follow these [networking instructions](/docs/connections/storage/catalog/redshift/#networking) to configure network and security settings.
+2. Follow the [networking instructions](/docs/connections/storage/catalog/redshift/#networking) to configure network and security settings.

-## Create a new role and user
+## Step 1: Roles and permissions
+
+Segment recommends that you create a new Redshift user and role with only the required permissions.

-Run the SQL commands below to create a role (`segment_entities`) and user (`segment_entities_user`).
+Create a new role and user for the Segment Data Graph. This new role will only have access to the datasets you make available to the Data Graph. Run the following SQL commands in your Redshift cluster:
+
+```sql
+-- Create a user and role for the Data Graph
+CREATE ROLE SEGMENT_LINKED_ROLE;
+CREATE USER SEGMENT_LINKED_USER PASSWORD 'your_password';
+GRANT ROLE SEGMENT_LINKED_ROLE TO SEGMENT_LINKED_USER;
+```
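+
+Optionally, confirm the grant took effect before moving on. This check is a sketch that assumes your cluster supports role-based access control and its `SVV_USER_GRANTS` system view:
+
+```sql
+-- List users explicitly granted the role; expect to see segment_linked_user
+SELECT user_name, role_name
+FROM svv_user_grants
+WHERE role_name = 'segment_linked_role';
+```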
+
+## Step 2: Create a database for Segment to store checkpoint tables
+
+> info ""
+> Segment recommends creating a new database for the Data Graph. If you choose to use an existing database that has also been used for [Segment Reverse ETL](/docs/connections/reverse-etl/), you must follow the [additional instructions](#update-user-access-for-the-segment-reverse-etl-schema) to update user access for the Segment Reverse ETL schema.
+
+Provide write access to the database, as Segment requires it to create a schema for internal bookkeeping and to store checkpoint tables for the queries that are executed. Segment recommends creating a new database for this purpose. This is also the database you'll specify as the **Database Name** when connecting Redshift with the Segment app.
+
--- allow the role to create new schemas on specified database. (This is the name you chose when provisioning your cluster)
-GRANT CREATE ON DATABASE "" TO ROLE segment_entities;
+Run the following SQL commands in your Redshift cluster:
--- create a user named "segment_entities_user" that Segment will use when connecting to your Redshift cluster.
-CREATE USER segment_entities_user PASSWORD '';
+
+```sql
+-- Create and grant access to a Segment internal DB used for bookkeeping
--- grant role permissions to the user
-GRANT ROLE segment_entities TO segment_entities_user;
+CREATE DATABASE SEGMENT_LINKED_PROFILES_DB;
+GRANT CREATE ON DATABASE SEGMENT_LINKED_PROFILES_DB TO ROLE SEGMENT_LINKED_ROLE;
 ```

-## Grant access to schemas and tables
+## Step 3: Grant read-only access for the Data Graph
+
+Grant the Segment role read-only access to any additional schemas you want to use for the Data Graph, including the Profiles Sync database.

-You'll need to grant access to schemas and tables that you'd like to enrich with. This allows Segment to list schemas, tables, and columns, as well as create entities with data extracted and ingested to Segment.
+To locate the Profiles Sync database, navigate to **Unify > Profiles Sync > Settings > Connection Settings**. You'll see the database and schema names.

 ### Schemas

+Grant schema permissions based on your needs. See Amazon’s docs to view [schema permissions](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html){:target="_blank"} and [example commands](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT-examples.html){:target="_blank"} that you can use to grant permissions. Repeat the following SQL query for each schema you want to use for the Data Graph.
-Grant schema permissions based on customer need. Visit Amazon's docs to view [schema permissions](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html){:target="_blank"} and [example commands](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT-examples.html){:target="_blank"} that you can use to grant permissions.
+
+```sql
+-- ********** REPEAT THE SQL QUERY BELOW FOR EACH SCHEMA YOU WANT TO USE FOR THE DATA GRAPH **********
-```ts
--- view specific schemas in database
-GRANT USAGE ON SCHEMA TO ROLE segment_entities;
+GRANT USAGE ON SCHEMA "the_schema_name" TO ROLE SEGMENT_LINKED_ROLE;
 ```

 ### Tables

+Grant table permissions based on your needs. Learn more about [Amazon’s table permissions](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html){:target="_blank"}.
+
+Table permissions can either be handled in bulk:
+
+```sql
+-- query data from all tables in a schema
+GRANT SELECT ON ALL TABLES IN SCHEMA "the_schema_name" TO ROLE SEGMENT_LINKED_ROLE;
+```
-Grant table permissions based on customer need. Learn more about Amazon's [table permissions](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html){:target="_blank"}.
+
+Or in a more granular fashion if needed:
+
-```ts
+```sql
 -- query data from a specific table in a schema
-GRANT SELECT ON TABLE . TO ROLE segment_entities;
+GRANT SELECT ON TABLE "the_schema_name"."the_table_name" TO ROLE SEGMENT_LINKED_ROLE;
 ```

-### RETL table permissions
+## Step 4: Validate permissions
+
+To verify that you've set up the right permissions for a specific table, log in with the username and password you created for `SEGMENT_LINKED_USER` and run the following commands. If they succeed, the role has the correct permissions and you can view the respective table.
-If you used RETL in your database, you'll need to add the following [table permissions](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html){:target="_blank"}:
+
+```sql
+SHOW SCHEMAS FROM DATABASE "THE_READ_ONLY_DB";
+SELECT * FROM "THE_READ_ONLY_DB"."A_SCHEMA"."SOME_TABLE" LIMIT 10;
+```
+
+## Step 5: Connect your warehouse to Segment
+
+To connect your warehouse to Segment:
+
+1. Navigate to **Unify > Data Graph**. This should be a Unify space with Profiles Sync already set up.
+2. Click **Connect warehouse**.
+3. Select **Redshift** as your warehouse type.
+4. Enter your warehouse credentials. Segment requires the following settings to connect to your Redshift warehouse:
+   * **Host Name:** The Redshift URL.
+   * **Port:** The Redshift connection port.
+   * **Database:** The only database that Segment requires write access to, used to create tables for internal bookkeeping. This database is referred to as `segment_linked_profiles_db` in the SQL above.
+   * **Username:** The Redshift user that Segment uses to run SQL in your warehouse. This user is referred to as `segment_linked_user` in the SQL above.
+   * **Password:** The password of the user above.
+5. Test your connection, then click **Save**.
+
+## Update user access for the Segment Reverse ETL schema
+
+If Segment Reverse ETL ran in the database you're configuring as the Segment connection database, a Segment-managed schema already exists, and you need to give the new Segment user access to it. Run the following SQL if the Segment app reports that the user doesn't have sufficient privileges on an existing `__segment_reverse_etl` schema:
+
+```sql
+-- If you use an existing database that already has Segment Reverse ETL schemas, grant the role access to the existing schemas.
-GRANT USAGE, CREATE ON SCHEMA __segment_reverse_etl TO ROLE segment_entities;
+GRANT USAGE, CREATE, DROP ON SCHEMA segment_connection_db.__segment_reverse_etl TO ROLE SEGMENT_LINKED_ROLE;
-GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA __segment_reverse_etl TO ROLE segment_entities;
+GRANT SELECT, INSERT, UPDATE, DELETE, DROP ON ALL TABLES IN SCHEMA segment_connection_db.__segment_reverse_etl TO ROLE SEGMENT_LINKED_ROLE;
+```