Can you add a slightly more realistic example of a data pipeline in the cloud? #55
Comments
We have an operator that transfers files from S3 to Hive, as well as a Hive-to-MySQL operator, which is unfortunately missing from the documentation. Since the hooks are already in place, an S3-to-Postgres operator should not be too difficult. Here is an example of how our S3-to-Hive operator would work:
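A minimal sketch of what such a transfer step could look like, assuming the hooks mentioned above. The function names and the injected `s3_download` / `hive_run` callables are hypothetical stand-ins for the real S3 and Hive hooks, not the actual operator from this thread:

```python
def build_hive_load_statement(local_path, table, partition=None):
    """Build the HiveQL that loads a staged local file into a table."""
    stmt = "LOAD DATA LOCAL INPATH '%s' OVERWRITE INTO TABLE %s" % (local_path, table)
    if partition:
        # Render an optional partition spec, e.g. PARTITION (ds='2015-01-01')
        spec = ", ".join("%s='%s'" % (k, v) for k, v in sorted(partition.items()))
        stmt += " PARTITION (%s)" % spec
    return stmt


def s3_to_hive(s3_download, hive_run, s3_key, table, partition=None):
    """Download an S3 object to a local file, then load it into Hive.

    s3_download and hive_run are injected callables standing in for the
    S3 and Hive hooks; in a real operator they would wrap the hook APIs.
    """
    local_path = s3_download(s3_key)
    hive_run(build_hive_load_statement(local_path, table, partition))
```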
Usually, the transfer is a separate operation from any transform and has its own operator.
Agreed, the current tutorial is really just focused on the mechanics of Airflow with very foobar-y examples. I didn't want to write a pipeline that was too stack specific (MySQL / Hive / ...) and wanted to make sure it would work for anyone, regardless of the stack they might have. Maybe using a SqliteOperator to do some analytics on some data scraped from the Internet would be a good example. It could be interesting to re-write the Luigi example for comparison :) But yeah, it's on the TODO list.
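A small sketch of the SQLite-analytics idea above, using only the standard library. The `page_hits` table and its rows are inline stand-ins for scraped data; a real pipeline task would fetch and stage them first:

```python
import sqlite3

# In-memory database standing in for the pipeline's SQLite target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_hits (url TEXT, hits INTEGER)")

# Hypothetical "scraped" rows; a real task would insert fetched data here.
conn.executemany(
    "INSERT INTO page_hits VALUES (?, ?)",
    [("/home", 120), ("/docs", 45), ("/home", 30)],
)

# The analytics step: aggregate hits per URL.
for url, total in conn.execute(
    "SELECT url, SUM(hits) FROM page_hits GROUP BY url ORDER BY url"
):
    print(url, total)
```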
Cool. I'll close this for now.
For example, the Luigi example walks through a case that involves importing data into a DB. It would be cool if there were some examples that read from one location (e.g. S3) and wrote to another (e.g. DB).