This repository has been archived by the owner on Apr 11, 2024. It is now read-only.

Commit

Merge branch 'update-readme' of https://github.com/astronomer/apache-…
sunank200 committed Mar 27, 2023
2 parents 497aeaa + 1c49a77 commit 6a95b80
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -9,10 +9,10 @@ transfers made easy<br><br>

[![CI](https://github.com/astronomer/apache-airflow-provider-transfers/actions/workflows/ci-uto.yaml/badge.svg)](https://github.com/astronomer/apache-airflow-provider-transfers)

- The **Universal Transfer Operator** allows data transfers between any supported source and target Datasets in [Apache Airflow](https://airflow.apache.org/). It offers a consistent agnostic interface, simplifying the users' experience, so they do not need to use specific providers or operators for transfers. The Astro Python SDK is maintained by [Astronomer](https://astronomer.io).
+ The **Universal Transfer Operator** allows data transfers between any supported source and target Datasets in [Apache Airflow](https://airflow.apache.org/). It offers a consistent agnostic interface, simplifying the users' experience, so they do not need to use specific providers or operators for transfers. The Universal Transfer Operator is maintained by [Astronomer](https://astronomer.io).
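The "consistent agnostic interface" the changed line describes can be illustrated with a minimal stdlib-only sketch. All class and field names below (`Dataset`, `File`, `Table`, `UniversalTransfer`, `conn_id`) are hypothetical stand-ins, not the provider's actual API:

```python
from dataclasses import dataclass


@dataclass
class Dataset:
    """Common base: every source or target is just a Dataset."""
    conn_id: str


@dataclass
class File(Dataset):
    path: str = ""


@dataclass
class Table(Dataset):
    name: str = ""


class UniversalTransfer:
    """One operator for any source/target pair: callers never pick a
    provider-specific operator, they only describe the two datasets."""

    def __init__(self, source: Dataset, destination: Dataset):
        self.source = source
        self.destination = destination

    def describe(self) -> str:
        return f"{type(self.source).__name__} -> {type(self.destination).__name__}"


transfer = UniversalTransfer(
    source=File(conn_id="aws_default", path="s3://bucket/data.csv"),
    destination=Table(conn_id="snowflake_default", name="SALES"),
)
print(transfer.describe())  # File -> Table
```

The point of the design is that adding a new source or target means adding a `Dataset` subtype, not a new operator class per pair.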

- This ensures a consistent set of data providers that can read from and write to dataset. The Universal Transfer
- Operator can use the respective data providers to transfer between as a source and a destination. It also takes advantage of any existing fast and
+ This ensures a consistent set of data providers that can read from and write to the dataset. The Universal Transfer
+ Operator can use the respective data providers to transfer between a source and a destination. It also takes advantage of any existing fast and
direct high-speed endpoints, such as Snowflake's built-in ``COPY INTO`` command to load S3 files efficiently into Snowflake.
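The Snowflake fast path mentioned here comes down to issuing a single server-side SQL statement. A hedged sketch of building such a statement (the table and stage names are made up, and real usage would add credentials and error handling):

```python
def build_copy_into(table: str, stage: str, file_format: str = "CSV") -> str:
    """Build a Snowflake COPY INTO statement that loads staged S3 files
    server-side, so no data flows through the orchestrating worker."""
    return (
        f"COPY INTO {table} "
        f"FROM @{stage} "
        f"FILE_FORMAT = (TYPE = {file_format})"
    )


sql = build_copy_into("SALES", "my_s3_stage")
print(sql)  # COPY INTO SALES FROM @my_s3_stage FILE_FORMAT = (TYPE = CSV)
```

Because the load happens inside Snowflake against the external stage, the transfer speed is bounded by the warehouse and S3, not by Airflow worker bandwidth.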

The Universal Transfer Operator also supports transfers using third-party platforms like Fivetran.
@@ -52,7 +52,7 @@ Steps:
- Request destination dataset to ingest data from the file dataset.
- Destination dataset requests the source dataset for data.

- This is a faster way for datasets of larger size as there is only one network call involved and usually the bandwidth between vendors is high. Also, there is no requirement for memory/processing power of the worker node, since data never gets on the node. There is significant performance improvement due to native transfers.
+ This is a faster way to transfer datasets of larger size as there is only one network call involved and usually the bandwidth between vendors is high. Also, there is no requirement for memory/processing power of the worker node, since data never gets on the node. There is significant performance improvement due to native transfers.
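The two-step handoff described above (ask the destination to ingest, and the destination pulls from the source directly) can be sketched with plain Python objects. The classes here (`SourceDataset`, `DestinationDataset`) are hypothetical illustrations of the protocol, not the provider's implementation:

```python
class SourceDataset:
    """Hypothetical source that can serve its own data directly."""

    def __init__(self, rows):
        self._rows = rows

    def read(self):
        return list(self._rows)


class DestinationDataset:
    """Hypothetical destination that pulls from the source itself,
    so the orchestrating worker never holds the data."""

    def __init__(self):
        self.rows = []

    def ingest_from(self, source: SourceDataset) -> int:
        # Step 2: the destination requests the data from the source
        # in one direct call (a single network hop in the real case).
        self.rows = source.read()
        return len(self.rows)


# Step 1: the operator only asks the destination to ingest; it does
# not stream any rows through itself.
source = SourceDataset(rows=[("a", 1), ("b", 2)])
dest = DestinationDataset()
loaded = dest.ingest_from(source)
print(loaded)  # 2
```

The worker's role reduces to coordination, which is why memory and CPU on the worker node stop being a bottleneck for large transfers.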

> **_NOTE:_**
Native implementation is in progress and will be added in upcoming releases.
