Fix analytics-engineer.md broken links #349

Merged 1 commit on May 20, 2023
6 changes: 3 additions & 3 deletions docs/website/docs/user-guides/analytics-engineer.md
@@ -14,7 +14,7 @@ keywords: [analytics engineer, analytics, business intelligence]

As a result, this data usually needs to be rearranged into a structure that analysts and other business users can use to answer questions. For example, you will often want to consolidate this data into tables and entities that represent the business process, which makes it easier for downstream consumers.

- To make this easier, `dlt` supports a dbt runner that allows you to create a virtual environment, install dbt on it, pass it credentials, and run a dbt package from a local or online location. You can read more about this [here](./using-loaded-data/transforming-the-data).
+ To make this easier, `dlt` supports a dbt runner that allows you to create a virtual environment, install dbt on it, pass it credentials, and run a dbt package from a local or online location. You can read more about this [here](../using-loaded-data/transforming-the-data).
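The dbt runner described above can be sketched roughly as follows, using the helper names from dlt's docs on transforming loaded data; the package location and the printed result attributes are assumptions, so treat this as a sketch rather than a definitive invocation:

```python
# Hedged sketch of dlt's dbt runner. The dlt calls are imported lazily so
# this module stays importable even without dlt installed.
def run_dbt_package(pipeline, package_location: str):
    """Run a dbt package (local path or git URL) against the pipeline's
    destination, inside a dedicated virtual environment with dbt installed."""
    import dlt  # lazy import: only needed when the runner is actually used

    # Create (or reuse) a virtual environment with dbt installed.
    venv = dlt.dbt.get_venv(pipeline)
    # Point the runner at the package; credentials come from the pipeline.
    dbt = dlt.dbt.package(pipeline, package_location, venv=venv)
    # Run all models and report per-model results.
    models = dbt.run_all()
    for m in models:
        print(m.model_name, m.message)
    return models
```

A caller would pass an existing `dlt` pipeline and a path such as a local `dbt_package/` directory or a git URL.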

### Contributing dbt packages

@@ -37,7 +37,7 @@ If you run into issues or have questions, please reach out in the `#technical-he

## Use Case #2: Clean, type, and customize how data is produced and loaded

- `dlt` allows you to customize how data is produced, enabling you to rename, filter, and modify the data that arrives at your destination. Before you pass the data to `dlt` for normalization, you can transform the data in Python. After `dlt` has normalized and loaded the data, you can also further transform it using SQL. You can learn more about possible customizations in the docs (e.g. [pseudonymizing_columns](./customizations/customizing-pipelines/pseudonymizing_columns)).
+ `dlt` allows you to customize how data is produced, enabling you to rename, filter, and modify the data that arrives at your destination. Before you pass the data to `dlt` for normalization, you can transform the data in Python. After `dlt` has normalized and loaded the data, you can also further transform it using SQL. You can learn more about possible customizations in the docs (e.g. [pseudonymizing_columns](../customizations/customizing-pipelines/pseudonymizing_columns)).
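A minimal sketch of the "transform before normalization" idea from the pseudonymizing-columns docs: replace a column with a salted hash before the row reaches the destination. The column name, salt, and resource name are placeholders:

```python
import hashlib

def pseudonymize_name(row: dict) -> dict:
    """Replace the 'name' column with a salted SHA-256 digest: stable
    across loads (joins still work) but not trivially reversible."""
    salt = "WI@N57%zZrmk#88c"  # placeholder secret; keep it fixed and private
    digest = hashlib.sha256((row["name"] + salt).encode()).hexdigest()
    row["name"] = digest
    return row

# With a dlt resource you would attach the transform before running:
#   pipeline.run(my_resource.add_map(pseudonymize_name))

row = pseudonymize_name({"name": "alice", "email": "a@example.com"})
print(row["name"])  # a 64-character hex digest, stable across runs
```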

## Use Case #3: Create your own pipelines in a declarative fashion

@@ -47,4 +47,4 @@ It features a declarative approach to configuring loading modes, handling all of

If you combine it with a `dbt` package, you can use it end to end to deliver analytics.

- Read more about [running dbt packages](./using-loaded-data/transforming-the-data), [incremental loading](./general-usage/incremental-loading), and [performance hints and schemas](./walkthroughs/adjust-a-schema).
+ Read more about [running dbt packages](../using-loaded-data/transforming-the-data), [incremental loading](../general-usage/incremental-loading), and [performance hints and schemas](../walkthroughs/adjust-a-schema).
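The core idea behind the incremental-loading docs referenced above can be shown in plain Python: only yield records newer than the last stored cursor value. In dlt this bookkeeping is handled by `dlt.sources.incremental` and the pipeline state; the in-memory `state` dict here is a stand-in for illustration only:

```python
def get_new_records(records: list[dict], state: dict) -> list[dict]:
    """Return only records newer than the stored cursor, then advance it.
    Mimics what an incremental cursor on 'updated_at' does for a resource."""
    last = state.get("updated_at", "1970-01-01")
    fresh = [r for r in records if r["updated_at"] > last]
    if fresh:
        # Advance the cursor so the next run skips these records.
        state["updated_at"] = max(r["updated_at"] for r in fresh)
    return fresh

state = {}
batch = [{"id": 1, "updated_at": "2023-05-01"},
         {"id": 2, "updated_at": "2023-05-02"}]
print(get_new_records(batch, state))  # both records on the first run
print(get_new_records(batch, state))  # empty list on the second run
```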