203 changes: 203 additions & 0 deletions docs/content/en/latest/pipelines/ldm_extension/_index.md
@@ -0,0 +1,203 @@
---
title: "LDM Extension"
linkTitle: "LDM Extension"
weight: 3
no_list: true
---

Child workspaces inherit the [Logical Data Model](https://www.gooddata.com/docs/cloud/model-data/concepts/logical-data-model/) (LDM) from their parent. You can use GoodData Pipelines to extend a child workspace's LDM with extra datasets specific to a tenant's requirements.

{{% alert color="info" %}} See [Set Up Multiple Tenants](https://www.gooddata.com/docs/cloud/workspaces/) to learn more about leveraging multitenancy in GoodData.{{% /alert %}}

This documentation uses the terms *custom datasets* and *custom fields*. In this context, *custom* refers to extending the LDM beyond the inherited datasets.

## Usage

Start by initializing the LdmExtensionManager:

```python
from gooddata_pipelines import LdmExtensionManager

host = "http://localhost:3000"
token = "some_user_token"

ldm_extension_manager = LdmExtensionManager.create(host=host, token=token)

```

To extend the LDM, you need to define the custom datasets and the fields they should contain. The script also checks the validity of analytical objects before and after the update; updates that introduce new invalid relations are automatically rolled back. You can opt out of this behavior by setting the `check_relations` parameter to `False`.

### Custom Dataset Definitions

A custom dataset represents a new dataset appended to the child workspace's LDM. It is defined by the following parameters:

| name | type | description |
|------|------|-------------|
| workspace_id | string | ID of the child workspace. |
| dataset_id | string | ID of the custom dataset. |
| dataset_name | string | Name of the custom dataset. |
| dataset_datasource_id | string | ID of the data source. |
| dataset_source_table | string | Name of the table in the Physical Data Model. |
| dataset_source_sql | string \| None | SQL query defining the dataset. |
| parent_dataset_reference | string \| None | ID of the parent dataset to which the custom one will be connected. |
| parent_dataset_reference_attribute_id | string | ID of the attribute used for creating the relationship in the parent dataset. |
| dataset_reference_source_column | string | Name of the column used for creating the relationship in the custom dataset. |
| dataset_reference_source_column_data_type | [ColumnDataType](#columndatatype) | Column data type. |
| workspace_data_filter_id | string | ID of the workspace data filter to use. |
| workspace_data_filter_column_name | string | Name of the column in custom dataset used for filtering. |

#### Validity constraints

Exactly one of `dataset_source_table` and `dataset_source_sql` must be specified with a truthy value. An exception is raised if both parameters are falsy or if both are truthy.

### Custom Field Definitions

Custom fields define the individual fields within the custom datasets defined above. Each custom field is specified with the following parameters:

| name | type | description |
|---------------|----------|-----------------|
| workspace_id | string | ID of the child workspace. |
| dataset_id | string | ID of the custom dataset. |
| custom_field_id | string | ID of the custom field. |
| custom_field_name | string | Name of the custom field. |
| custom_field_type | [CustomFieldType](#customfieldtype) | Indicates whether the field represents an attribute, a date, or a fact. |
| custom_field_source_column | string | Name of the column in the physical data model. |
| custom_field_source_column_data_type | [ColumnDataType](#columndatatype) | Data type of the field. |

#### Validity constraints

The custom field definitions must comply with the following criteria:

- Each attribute and fact must have a unique combination of `workspace_id` and `custom_field_id` values.
- Each date must have a unique combination of `dataset_id` and `custom_field_id` values.
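In other words, attributes and facts are deduplicated per workspace, while dates are deduplicated per dataset. A hypothetical duplicate check illustrating the rule (not the library's implementation):

```python
from collections import Counter


def find_duplicate_field_keys(fields: list[dict]) -> list[tuple]:
    """Return uniqueness-key collisions among custom field definitions."""
    keys = []
    for field in fields:
        if field["custom_field_type"] == "date":
            # Dates must be unique per (dataset_id, custom_field_id).
            keys.append((field["dataset_id"], field["custom_field_id"]))
        else:
            # Attributes and facts must be unique per (workspace_id, custom_field_id).
            keys.append((field["workspace_id"], field["custom_field_id"]))
    return [key for key, count in Counter(keys).items() if count > 1]
```

An empty result means the definitions satisfy both constraints.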

### Enumerations

Some parameters of the custom dataset and custom field definitions are specified via the `CustomFieldType` and `ColumnDataType` enums.

#### CustomFieldType

The following field types are supported:

| name | value |
|------|-------|
| ATTRIBUTE | "attribute" |
| FACT | "fact" |
| DATE | "date" |

#### ColumnDataType

The following data types are supported:

| name | value |
|------|-------|
| INT | "INT" |
| STRING | "STRING" |
| DATE | "DATE" |
| NUMERIC | "NUMERIC" |
| TIMESTAMP | "TIMESTAMP" |
| TIMESTAMP_TZ | "TIMESTAMP_TZ" |
| BOOLEAN | "BOOLEAN" |

### Relations Check

Because changes to the LDM may impact existing analytical objects, the script performs checks to prevent potentially undesirable changes.

{{% alert color="warning" %}} Changes to the LDM can invalidate existing objects. For example, removing a previously added custom field will break any analytical objects using that field. {{% /alert %}}

To prevent this, the script will:

1. Store the current workspace layout (analytical objects and LDM).
1. Check whether the relations of metrics, visualizations, and dashboards are valid, creating a set of objects with invalid relations.
1. Push the updated LDM to GoodData Cloud.
1. Check object relations again, creating a new set of objects with invalid relations.
1. Compare the two sets:
    - If the new set is a subset of the old one, the update is considered successful.
    - Otherwise, the update is rolled back: the initially stored workspace layout is pushed to GoodData Cloud again, reverting the changes to the workspace.

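The comparison in the final step can be sketched as a simple subset test (illustrative only; the real sets contain the workspace's analytical objects):

```python
def update_is_successful(
    invalid_before: set[str], invalid_after: set[str]
) -> bool:
    """The update stands only if it introduced no new invalid relations."""
    return invalid_after.issubset(invalid_before)
```

An object that was already broken before the update does not trigger a rollback; only newly broken objects do.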
You can opt out of this check and rollback behavior by setting the `check_relations` parameter to `False` when using the LdmExtensionManager.

```python
# By setting `check_relations` to False, you bypass the default checks
# and rollback mechanism. Note that this may invalidate existing objects.
ldm_extension_manager.process(
    custom_datasets=custom_dataset_definitions,
    custom_fields=custom_field_definitions,
    check_relations=False,
)

```

## Example

Here is a complete example of extending a child workspace's LDM:

```python
import logging

from gooddata_pipelines import (
    ColumnDataType,
    CustomDatasetDefinition,
    CustomFieldDefinition,
    CustomFieldType,
    LdmExtensionManager,
)

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

host = "http://localhost:3000"
token = "some_user_token"

# Initialize the manager
ldm_extension_manager = LdmExtensionManager.create(host=host, token=token)

# Optionally, you can subscribe to the logger object to receive log messages
ldm_extension_manager.logger.subscribe(logger)

# Prepare the definitions
custom_dataset_definitions = [
    CustomDatasetDefinition(
        workspace_id="child_workspace_id",
        dataset_id="products_custom_dataset_id",
        dataset_name="Custom Products Dataset",
        dataset_datasource_id="gdc_datasource_id",
        dataset_source_table="products_custom",
        dataset_source_sql=None,
        parent_dataset_reference="products",
        parent_dataset_reference_attribute_id="products.product_id",
        dataset_reference_source_column="product_id",
        dataset_reference_source_column_data_type=ColumnDataType.INT,
        workspace_data_filter_id="wdf_id",
        workspace_data_filter_column_name="wdf_column",
    )
]

custom_field_definitions = [
    CustomFieldDefinition(
        workspace_id="child_workspace_id",
        dataset_id="products_custom_dataset_id",
        custom_field_id="is_sold_out",
        custom_field_name="Sold Out",
        custom_field_type=CustomFieldType.ATTRIBUTE,
        custom_field_source_column="is_sold_out",
        custom_field_source_column_data_type=ColumnDataType.BOOLEAN,
    ),
    CustomFieldDefinition(
        workspace_id="child_workspace_id",
        dataset_id="products_custom_dataset_id",
        custom_field_id="category_detail",
        custom_field_name="Category (Detail)",
        custom_field_type=CustomFieldType.ATTRIBUTE,
        custom_field_source_column="category_detail",
        custom_field_source_column_data_type=ColumnDataType.STRING,
    ),
]

# Call the process method to extend the LDM
ldm_extension_manager.process(
    custom_datasets=custom_dataset_definitions,
    custom_fields=custom_field_definitions,
)

```
10 changes: 9 additions & 1 deletion gooddata-pipelines/README.md
@@ -57,4 +57,12 @@ full_load_data: list[UserFullLoad] = UserFullLoad.from_list_of_dicts(
provisioner.full_load(full_load_data)
```

Ready-made scripts covering the basic use cases can be found in the [GoodData Productivity Tools](https://github.com/gooddata/gooddata-productivity-tools) repository.
## Bugs & Requests

Please use the [GitHub issue tracker](https://github.com/gooddata/gooddata-python-sdk/issues) to submit bugs
or request features.

## Changelog

See [GitHub releases](https://github.com/gooddata/gooddata-python-sdk/releases) for released versions
and a list of changes.
14 changes: 14 additions & 0 deletions gooddata-pipelines/gooddata_pipelines/__init__.py
@@ -13,6 +13,15 @@
from .backup_and_restore.storage.local_storage import LocalStorage
from .backup_and_restore.storage.s3_storage import S3Storage

# -------- LDM Extension --------
from .ldm_extension.ldm_extension_manager import LdmExtensionManager
from .ldm_extension.models.custom_data_object import (
    ColumnDataType,
    CustomDatasetDefinition,
    CustomFieldDefinition,
    CustomFieldType,
)

# -------- Provisioning --------
from .provisioning.entities.user_data_filters.models.udf_models import (
    UserDataFilterFullLoad,
@@ -65,5 +74,10 @@
    "UserDataFilterProvisioner",
    "UserDataFilterFullLoad",
    "EntityType",
    "LdmExtensionManager",
    "CustomDatasetDefinition",
    "CustomFieldDefinition",
    "ColumnDataType",
    "CustomFieldType",
    "__version__",
]
50 changes: 50 additions & 0 deletions gooddata-pipelines/gooddata_pipelines/api/gooddata_api.py
@@ -174,6 +174,44 @@ def get_automations(self, workspace_id: str) -> requests.Response:
)
return self._get(endpoint)

    def get_all_metrics(self, workspace_id: str) -> requests.Response:
        """Get all metrics from the specified workspace.

        Args:
            workspace_id (str): The ID of the workspace to retrieve metrics from.

        Returns:
            requests.Response: The response containing the metrics.
        """
        endpoint = f"/entities/workspaces/{workspace_id}/metrics"
        headers = {**self.headers, "X-GDC-VALIDATE-RELATIONS": "true"}
        return self._get(endpoint, headers=headers)

    def get_all_visualization_objects(
        self, workspace_id: str
    ) -> requests.Response:
        """Get all visualizations from the specified workspace.

        Args:
            workspace_id (str): The ID of the workspace to retrieve visualizations from.

        Returns:
            requests.Response: The response containing the visualizations.
        """
        endpoint = f"/entities/workspaces/{workspace_id}/visualizationObjects"
        headers = {**self.headers, "X-GDC-VALIDATE-RELATIONS": "true"}
        return self._get(endpoint, headers=headers)

    def get_all_dashboards(self, workspace_id: str) -> requests.Response:
        """Get all dashboards from the specified workspace.

        Args:
            workspace_id (str): The ID of the workspace to retrieve dashboards from.

        Returns:
            requests.Response: The response containing the dashboards.
        """
        endpoint = f"/entities/workspaces/{workspace_id}/analyticalDashboards"
        headers = {**self.headers, "X-GDC-VALIDATE-RELATIONS": "true"}
        return self._get(endpoint, headers=headers)

    def _get(
        self, endpoint: str, headers: dict[str, str] | None = None
    ) -> requests.Response:
@@ -253,3 +291,15 @@ def _delete(
        url = self._get_url(endpoint)

        return requests.delete(url, headers=self.headers, timeout=TIMEOUT)

    @staticmethod
    def raise_if_response_not_ok(*responses: requests.Response) -> None:
        """Check if responses from API calls are OK.

        Raises ValueError if any response is not OK (status code not 2xx).
        """
        for response in responses:
            if not response.ok:
                raise ValueError(
                    f"Request to {response.url} failed with status code "
                    f"{response.status_code}: {response.text}"
                )
@@ -0,0 +1 @@
# (C) 2025 GoodData Corporation