depends on #4500

prior context:

**User-facing changes:**

(1) Deprecating `output_notebook` in favor of `output_notebook_name`. 

- Using the old property `output_notebook` still requires the "file_manager" resource and results in a FileHandle output, so there is no breaking change for existing users.
- Using the new property `output_notebook_name` results in a bytes output and requires the "output_notebook_io_manager" resource; see details in (2).

(2) The new param requires a dedicated IO manager for the output notebook: when `output_notebook` or `output_notebook_name` is specified, "output_notebook_io_manager" is required as a resource.

- We provide a built-in `local_output_notebook_io_manager` for handling local output notebook materialization.
- **New capabilities:** Users can now customize their own "output_notebook_io_manager" by extending `OutputNotebookIOManager`. This enables use cases such as:
    - Users who want better asset key control can override `get_output_asset_key`.
    - Users who want precise IO control over the output notebook can customize their own `handle_output` and `load_input`, e.g. to control the file name of the output notebook.
    - Users who want to attach more meaningful metadata can yield `EventMetadataEntry` in their own `handle_output` method.
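To make the extension points above concrete, here is a hedged sketch of a custom IO manager. This is NOT the dagstermill implementation: the base class is stubbed so the example is self-contained, and `FakeContext` is a stand-in for Dagster's real output/input context. Only the method names (`get_output_asset_key`, `handle_output`, `load_input`) come from the description above; everything else is assumed for illustration.

```python
import os

class OutputNotebookIOManager:  # stub of the extension point described above
    def get_output_asset_key(self, context):
        return None

class StepKeyedNotebookIOManager(OutputNotebookIOManager):
    """Writes each executed notebook to <base_dir>/<step_key>.ipynb."""

    def __init__(self, base_dir):
        self.base_dir = base_dir

    def _path(self, context):
        # Precise IO control: choose the output notebook's file name.
        return os.path.join(self.base_dir, f"{context.step_key}.ipynb")

    def get_output_asset_key(self, context):
        # Better asset key control: key notebooks by step.
        return ["notebooks", context.step_key]

    def handle_output(self, context, obj):
        # `obj` is the executed notebook as bytes (see change (2) above).
        os.makedirs(self.base_dir, exist_ok=True)
        with open(self._path(context), "wb") as f:
            f.write(obj)

    def load_input(self, context):
        with open(self._path(context), "rb") as f:
            return f.read()

class FakeContext:
    """Stand-in for the real context object; exposes only `step_key`."""
    def __init__(self, step_key):
        self.step_key = step_key
```

A real subclass could also yield metadata entries from `handle_output`; the round-trip shape (bytes in, bytes out, asset key derived from context) is the part this sketch illustrates.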

This also fixes #3380.


Dagster is a data orchestrator for machine learning, analytics, and ETL

Dagster lets you define pipelines in terms of the data flow between reusable, logical components, then test locally and run anywhere. With a unified view of pipelines and the assets they produce, Dagster can schedule and orchestrate Pandas, Spark, SQL, or anything else that Python can invoke.

Dagster is designed for data platform engineers, data engineers, and full-stack data scientists. Building a data platform with Dagster makes your stakeholders more independent and your systems more robust. Developing data pipelines with Dagster makes testing easier and deploying faster.

Develop and test on your laptop, deploy anywhere

With Dagster’s pluggable execution, the same pipeline can run in-process against your local file system, or on a distributed work queue against your production data lake. You can set up Dagster’s web interface in a minute on your laptop, or deploy it on-premise or in any cloud.
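The idea behind pluggable execution can be sketched in a few lines. This is an illustrative pattern, not Dagster's actual executor API — every name here is an assumption: the pipeline definition stays fixed while the executor that runs it is swapped.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of pluggable execution (not Dagster's executor API):
# one pipeline definition, multiple interchangeable executors.
def extract(x):
    return x + 1

def transform(x):
    return x * 2

PIPELINE = [extract, transform]  # same definition for every executor

def run_in_process(pipeline, value):
    # Run every step serially in the current process.
    for step in pipeline:
        value = step(value)
    return value

def run_pooled(pipeline, value):
    # Same pipeline on a worker pool; a production executor would submit
    # steps to a distributed work queue instead of a local thread pool.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for step in pipeline:
            value = pool.submit(step, value).result()
    return value
```

Both executors produce identical results for the same pipeline, which is what lets you develop locally and deploy against production infrastructure without rewriting the pipeline.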

Model and type the data produced and consumed by each step

Dagster models data dependencies between steps in your orchestration graph and handles passing data between them. Optional typing on inputs and outputs helps catch bugs early.
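As a rough illustration of why typed inputs catch bugs early — this is a hand-rolled check, not Dagster's type system — a runner can validate each argument against the step's annotations before executing it:

```python
import inspect

# Assumed helper for illustration: reject mistyped inputs before the step
# runs, so type bugs surface at the step boundary rather than deep inside
# a pipeline.
def run_step(step, **kwargs):
    params = inspect.signature(step).parameters
    for name, value in kwargs.items():
        expected = params[name].annotation
        if expected is not inspect.Parameter.empty and not isinstance(value, expected):
            raise TypeError(
                f"{name!r} expected {expected.__name__}, got {type(value).__name__}"
            )
    return step(**kwargs)

def add(a: int, b: int) -> int:
    return a + b
```

Calling `run_step(add, a="1", b=2)` fails immediately with a `TypeError` naming the bad input, instead of producing a confusing downstream error.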

Link data to computations

Dagster’s Asset Manager tracks the data sets and ML models produced by your pipelines, so you can understand how they were generated and trace issues when they don’t look how you expect.
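The kind of bookkeeping involved can be sketched as a small lineage graph — an illustrative data structure, not the Asset Manager's actual implementation: record which step produced each asset and which assets it consumed, then walk upstream when something looks wrong.

```python
# Illustrative lineage store (all names assumed): asset -> producing step
# and upstream inputs.
lineage = {}

def record(asset, step, inputs=()):
    # Called when a step materializes an asset.
    lineage[asset] = {"step": step, "inputs": list(inputs)}

def trace(asset):
    # Walk upstream to find every asset this one was derived from.
    entry = lineage.get(asset)
    if entry is None:
        return []
    upstream = []
    for parent in entry["inputs"]:
        upstream.append(parent)
        upstream.extend(trace(parent))
    return upstream
```

With this structure, answering "how was this model generated?" is a graph walk from the asset back through the steps and inputs that produced it.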

Build a self-service data platform

Dagster helps platform teams build systems for data practitioners. Pipelines are built from shared, reusable, configurable data processing and infrastructure components. Dagster’s web interface lets anyone inspect these objects and discover how to use them.

Avoid dependency nightmares

Dagster’s repository model lets you isolate codebases so that problems in one pipeline don’t bring down the rest. Each pipeline can have its own package dependencies and Python version. Pipelines run in isolated processes so user code issues can't bring the system down.

Debug pipelines from a rich UI

Dagit, Dagster’s web interface, includes expansive facilities for understanding the pipelines it orchestrates. When inspecting a pipeline run, you can query over logs, discover the most time-consuming tasks via a Gantt chart, re-execute subsets of steps, and more.

Getting Started


pip install dagster dagit

This installs two modules:

  • Dagster: the core programming model and abstraction stack; stateless, single-node, single-process and multi-process execution engines; and a CLI tool for driving those engines.
  • Dagit: the UI for developing and operating Dagster pipelines, including a DAG browser, a type-aware config editor, and a live execution interface.

Hello dagster 👋

from dagster import pipeline, solid

@solid
def get_name():
    return "dagster"

@solid
def hello(context, name):
    context.log.info(f"Hello, {name}!")

@pipeline
def hello_pipeline():
    hello(get_name())

Save the code above in a file named `` You can execute the pipeline using any one of the following methods:

(1) Dagster Python API

from dagster import execute_pipeline

if __name__ == "__main__":
    execute_pipeline(hello_pipeline)   # Hello, dagster!

(2) Dagster CLI

$ dagster pipeline execute -f

(3) Dagit web UI

$ dagit -f


Next, jump right into our tutorial, or read our complete documentation. If you're actively using Dagster or have questions on getting started, we'd love to hear from you.


For details on contributing or running the project for development, check out our contributing guide.


Dagster works with the tools and systems that you're already using with your data, including:

| Integration | Dagster Library | Description |
| --- | --- | --- |
| Apache Airflow | dagster-airflow | Allows Dagster pipelines to be scheduled and executed, either containerized or uncontainerized, as Apache Airflow DAGs. |
| Apache Spark | dagster-spark · dagster-pyspark | Libraries for interacting with Apache Spark and PySpark. |
| Dask | dagster-dask | Provides a Dagster integration with Dask / Dask.Distributed. |
| Datadog | dagster-datadog | Provides a Dagster resource for publishing metrics to Datadog. |
| Jupyter / Papermill | dagstermill | Built on the papermill library, dagstermill is meant for integrating productionized Jupyter notebooks into Dagster pipelines. |
| PagerDuty | dagster-pagerduty | A library for creating PagerDuty alerts from Dagster workflows. |
| Snowflake | dagster-snowflake | A library for interacting with the Snowflake Data Warehouse. |

**Cloud Providers**

| Integration | Dagster Library | Description |
| --- | --- | --- |
| AWS | dagster-aws | A library for interacting with Amazon Web Services. Provides integrations with CloudWatch, S3, EMR, and Redshift. |
| Azure | dagster-azure | A library for interacting with Microsoft Azure. |
| GCP | dagster-gcp | A library for interacting with Google Cloud Platform. Provides integrations with GCS, BigQuery, and Cloud Dataproc. |

This list is growing as we are actively building more integrations, and we welcome contributions!