
Conversation

@sgomezvillamor (Contributor)

Just adds some docs

codecov bot commented Oct 31, 2025

❌ 23 Tests Failed:

Tests completed | Failed | Passed | Skipped
136             | 23     | 113    | 1
Failed tests, ordered by shortest run time:
tests.integration.snowflake.test_snowflake::test_snowflake_schema_extraction_one_table_multiple_views
Stack Traces | 0.739s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn removed, urn:li:query:view_urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Asnowflake%2Ctest_db.test_schema.view_3%2CPROD%29
Urn removed, urn:li:query:view_urn%3Ali%3Adataset%3A%28urn%3Ali%3AdataPlatform%3Asnowflake%2Ctest_db.test_schema.view_2%2CPROD%29

Urn changed, urn:li:dataset:(urn:li:dataPlatform:snowflake,test_db.test_schema.view_3,PROD):
<upstreamLineage> removed

Urn changed, urn:li:dataset:(urn:li:dataPlatform:snowflake,test_db.test_schema.view_2,PROD):
<upstreamLineage> removed
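
For golden-file diffs like the one above, the failure message already names the fix. A minimal sketch of re-running just this test with that flag via pytest's Python API (the file path and test id below are inferred from the report and are an assumption about the local repo layout):

    # Regenerate the golden files for the failing Snowflake test.
    # The --update-golden-files flag comes from the failure message above.
    import pytest

    pytest.main(
        [
            "tests/integration/snowflake/test_snowflake.py::test_snowflake_schema_extraction_one_table_multiple_views",
            "--update-golden-files",
        ]
    )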
tests.integration.powerbi.test_m_parser::test_mysql_odbc_query_without_dsn_mapping
Stack Traces | 1.64s run time
@pytest.mark.integration
    def test_mysql_odbc_query_without_dsn_mapping():
        """Test ODBC query parsing without dsn_to_database_schema mapping falls back to default behavior."""
        # Query with unqualified table reference
        odbc_query_unqualified = (
            'let\n    Source = Odbc.Query("driver={MySQL ODBC 9.2 Unicode Driver};'
            'server=10.1.10.1;database=employees;dsn=unmapped_dsn", '
            '"SELECT id, name FROM users")\nin\n    Source'
        )
    
        table: powerbi_data_classes.Table = powerbi_data_classes.Table(
            columns=[],
            measures=[],
            expression=odbc_query_unqualified,
            name="Users",
            full_name="Users.Table",
        )
    
        reporter = PowerBiDashboardSourceReport()
    
        # Test without dsn_to_database_schema mapping
        ctx, config, platform_instance_resolver = get_default_instances(
            {"dsn_to_platform_name": {"unmapped_dsn": "mysql"}}
        )
    
        data_platform_tables: List[DataPlatformTable] = parser.get_upstream_tables(
            table,
            reporter,
            ctx=ctx,
            config=config,
            platform_instance_resolver=platform_instance_resolver,
        )[0].upstreams
    
>       assert len(data_platform_tables) == 1
E       assert 0 == 1
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:1395: AssertionError
tests.integration.powerbi.test_m_parser::test_mysql_odbc_query_with_dsn_to_database_schema_mapping
Stack Traces | 1.65s run time
@pytest.mark.integration
    def test_mysql_odbc_query_with_dsn_to_database_schema_mapping():
        """Test ODBC query parsing with dsn_to_database_schema mapping using database.schema format (dsn: database.schema)."""
        # Query with unqualified table reference
        odbc_query_unqualified = (
            'let\n    Source = Odbc.Query("driver={PostgreSQL ODBC Driver};'
            'server=pg.example.com;database=warehouse;dsn=warehouse_dsn", '
            '"SELECT order_id, customer_id FROM orders")\nin\n    Source'
        )
    
        table: powerbi_data_classes.Table = powerbi_data_classes.Table(
            columns=[],
            measures=[],
            expression=odbc_query_unqualified,
            name="Orders",
            full_name="Orders.Table",
        )
    
        reporter = PowerBiDashboardSourceReport()
    
        # Test with database.schema mapping: "warehouse_dsn" -> "warehouse.sales" (includes schema)
        ctx, config, platform_instance_resolver = get_default_instances(
            {
                "dsn_to_platform_name": {"warehouse_dsn": "postgres"},
                "dsn_to_database_schema": {"warehouse_dsn": "warehouse.sales"},
            }
        )
    
        data_platform_tables: List[DataPlatformTable] = parser.get_upstream_tables(
            table,
            reporter,
            ctx=ctx,
            config=config,
            platform_instance_resolver=platform_instance_resolver,
        )[0].upstreams
    
>       assert len(data_platform_tables) == 1
E       assert 0 == 1
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:1355: AssertionError
tests.integration.powerbi.test_m_parser::test_mysql_odbc_query_with_dsn_to_database_mapping
Stack Traces | 1.66s run time
@pytest.mark.integration
    def test_mysql_odbc_query_with_dsn_to_database_mapping():
        """Test ODBC query parsing with dsn_to_database_schema mapping using database-only format (dsn: database)."""
        # Query with unqualified table reference "transaction" instead of "bank_demo.transaction"
        odbc_query_unqualified = (
            'let\n    Source = Odbc.Query("driver={MySQL ODBC 9.2 Unicode Driver};'
            'server=10.1.10.1;database=employees;dsn=testdb01", '
            '"SELECT transaction_id, account_id FROM transaction")\nin\n    Source'
        )
    
        table: powerbi_data_classes.Table = powerbi_data_classes.Table(
            columns=[],
            measures=[],
            expression=odbc_query_unqualified,
            name="BankTransactions",
            full_name="BankTransactions.Table",
        )
    
        reporter = PowerBiDashboardSourceReport()
    
        # Test with database-only mapping: "testdb01" -> "bank_demo" (no schema)
        ctx, config, platform_instance_resolver = get_default_instances(
            {
                "dsn_to_platform_name": {"testdb01": "mysql"},
                "dsn_to_database_schema": {"testdb01": "bank_demo"},
            }
        )
    
        data_platform_tables: List[DataPlatformTable] = parser.get_upstream_tables(
            table,
            reporter,
            ctx=ctx,
            config=config,
            platform_instance_resolver=platform_instance_resolver,
        )[0].upstreams
    
>       assert len(data_platform_tables) == 1
E       assert 0 == 1
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:1312: AssertionError
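
The three ODBC tests above exercise `dsn_to_platform_name` together with the `dsn_to_database_schema` option; per the test docstrings, the mapped value may be either "database" or "database.schema". A minimal sketch of the override dict shape these tests pass to `get_default_instances`, using the DSN names from the tests but combined into one dict for illustration (each test actually passes its own mapping):

    # Override config showing both accepted dsn_to_database_schema forms.
    # Keys are DSN names taken from the Odbc.Query connection string.
    override_config = {
        "dsn_to_platform_name": {
            "testdb01": "mysql",          # DSN -> DataHub platform
            "warehouse_dsn": "postgres",
        },
        "dsn_to_database_schema": {
            "testdb01": "bank_demo",             # database-only form
            "warehouse_dsn": "warehouse.sales",  # database.schema form
        },
    }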
tests.integration.powerbi.test_m_parser::test_mysql_odbc_query
Stack Traces | 1.68s run time
@pytest.mark.integration
    def test_mysql_odbc_query():
        q: str = M_QUERIES[36]
        table: powerbi_data_classes.Table = powerbi_data_classes.Table(
            columns=[],
            measures=[],
            expression=q,
            name="BankDemoTransactions",
            full_name="BankDemoTransactions.Transactions",
        )
    
        reporter = PowerBiDashboardSourceReport()
    
        ctx, config, platform_instance_resolver = get_default_instances()
    
        data_platform_tables: List[DataPlatformTable] = parser.get_upstream_tables(
            table,
            reporter,
            ctx=ctx,
            config=config,
            platform_instance_resolver=platform_instance_resolver,
        )[0].upstreams
    
>       assert len(data_platform_tables) == 1
E       assert 0 == 1
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:1269: AssertionError
tests.integration.powerbi.test_ingest::test_mysql_odbc_query_ingest
Stack Traces | 1.85s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn changed, urn:li:dataset:(urn:li:dataPlatform:powerbi,employeereport.employees_employees,PROD):
<upstreamLineage> removed
tests.integration.grafana.test_grafana::test_grafana_ingest
Stack Traces | 2.3s run time
Metadata files differ (use `pytest --update-golden-files` to update):
values_changed (condensed):
- root[11] DatasetSnapshot (pegasus2avro) SchemaMetadata fields[1], fields[2]: nativeDataType '' → 'sql_column'; type NullType → StringType
- root[19] schemaField fields[1], fields[2], fields[3]: nativeDataType '' → 'sql_column'; type NullType → StringType
- root[22] DatasetSnapshot (pegasus2avro) SchemaMetadata fields[3]: nativeDataType '' → 'sql_column'; type NullType → StringType
- root[30] schemaField fields[3], fields[4], fields[5], fields[6]: nativeDataType '' → 'sql_column'; type NullType → StringType
- root[41] schemaField fields[1]: nativeDataType 'BIGINT' → 'sql_column'; type NumberType → StringType
tests.integration.powerbi.test_m_parser::test_double_quotes_in_alias
Stack Traces | 2.51s run time
def test_double_quotes_in_alias():
        # SELECT CAST(sales_date AS DATE) AS \"\"Date\"\" in query
        q = 'let \n Source = Sql.Database("abc.com", "DB", [Query="SELECT CAST(sales_date AS DATE) AS ""Date"",#(lf) SUM(cshintrpret) / 60.0      AS ""Total Order All Items"",#(lf)#(tab)#(tab)#(tab)  SUM(cshintrpret) / 60.0 - LAG(SUM(cshintrpret) / 60.0, 1) OVER (ORDER BY CAST(sales_date AS DATE)) AS ""Total minute difference"",#(lf)#(tab)#(tab)#(tab)  SUM(sale_price)  / 60.0 - LAG(SUM(sale_price)  / 60.0, 1) OVER (ORDER BY CAST(sales_date AS DATE)) AS ""Normal minute difference""#(lf)        FROM   [DB].[dbo].[sales_t]#(lf)        WHERE  sales_date >= GETDATE() - 365#(lf)        GROUP  BY CAST(sales_date AS DATE),#(lf)#(tab)#(tab)CAST(sales_date AS TIME);"]) \n in \n Source'
    
        lineage: List[datahub.ingestion.source.powerbi.m_query.data_classes.Lineage] = (
            get_data_platform_tables_with_dummy_table(q=q)
        )
    
        assert len(lineage) == 1
    
        data_platform_tables = lineage[0].upstreams
    
>       assert len(data_platform_tables) == 1
E       assert 0 == 1
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:1123: AssertionError
tests.integration.powerbi.test_m_parser::test_sqlglot_parser
Stack Traces | 3.87s run time
def test_sqlglot_parser():
        table: powerbi_data_classes.Table = powerbi_data_classes.Table(
            expression=M_QUERIES[24],
            name="SALES_TARGET",
            full_name="dev.public.sales",
        )
        reporter = PowerBiDashboardSourceReport()
    
        ctx, config, platform_instance_resolver = get_default_instances(
            override_config={
                "server_to_platform_instance": {
                    "bu10758.ap-unknown-2.fakecomputing.com": {
                        "platform_instance": "sales_deployment",
                        "env": "PROD",
                    }
                },
                "native_query_parsing": True,
                "enable_advance_lineage_sql_construct": True,
            }
        )
    
        lineage: List[datahub.ingestion.source.powerbi.m_query.data_classes.Lineage] = (
            parser.get_upstream_tables(
                table,
                reporter,
                ctx=ctx,
                config=config,
                platform_instance_resolver=platform_instance_resolver,
            )
        )
    
        data_platform_tables: List[DataPlatformTable] = lineage[0].upstreams
    
>       assert len(data_platform_tables) == 2
E       assert 0 == 2
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:784: AssertionError
tests.integration.powerbi.test_m_parser::test_empty_string_in_m_query
Stack Traces | 3.92s run time
def test_empty_string_in_m_query():
        # TRIM(TRIM(TRIM(AGENT_NAME, '\"\"'), '+'), '\\'') is in Query
        q = "let\n  Source = Value.NativeQuery(Snowflake.Databases(\"bu10758.ap-unknown-2.fakecomputing.com\",\"operations_analytics_warehouse_prod\",[Role=\"OPERATIONS_ANALYTICS_MEMBER\"]){[Name=\"OPERATIONS_ANALYTICS\"]}[Data], \"select #(lf)UPPER(REPLACE(AGENT_NAME,'-','')) AS CLIENT_DIRECTOR,#(lf)TRIM(TRIM(TRIM(AGENT_NAME, '\"\"'), '+'), '\\'') AS TRIM_AGENT_NAME,#(lf)TIER,#(lf)UPPER(MANAGER),#(lf)TEAM_TYPE,#(lf)DATE_TARGET,#(lf)MONTHID,#(lf)TARGET_TEAM,#(lf)SELLER_EMAIL,#(lf)concat((UPPER(REPLACE(AGENT_NAME,'-',''))), MONTHID) as AGENT_KEY,#(lf)UNIT_TARGET AS SME_Quota,#(lf)AMV_TARGET AS Revenue_Quota,#(lf)SERVICE_QUOTA,#(lf)BL_TARGET,#(lf)SOFTWARE_QUOTA as Software_Quota#(lf)#(lf)from OPERATIONS_ANALYTICS.TRANSFORMED_PROD.V_SME_UNIT_TARGETS inner join OPERATIONS_ANALYTICS.TRANSFORMED_PROD.V_SME_UNIT #(lf)#(lf)where YEAR_TARGET >= 2022#(lf)and TEAM_TYPE = 'Accounting'#(lf)and TARGET_TEAM = 'Enterprise'#(lf)AND TIER = 'Client Director'\", null, [EnableFolding=true])\nin\n    Source"
    
        lineage: List[datahub.ingestion.source.powerbi.m_query.data_classes.Lineage] = (
            get_data_platform_tables_with_dummy_table(q=q)
        )
    
        assert len(lineage) == 1
    
        data_platform_tables = lineage[0].upstreams
    
>       assert len(data_platform_tables) == 2
E       assert 0 == 2
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:1099: AssertionError
tests.integration.powerbi.test_m_parser::test_sqlglot_parser_2
Stack Traces | 4.23s run time
def test_sqlglot_parser_2():
        table: powerbi_data_classes.Table = powerbi_data_classes.Table(
            expression=M_QUERIES[28],
            name="SALES_TARGET",
            full_name="dev.public.sales",
        )
        reporter = PowerBiDashboardSourceReport()
    
        ctx, config, platform_instance_resolver = get_default_instances(
            override_config={
                "server_to_platform_instance": {
                    "0DD93C6BD5A6.snowflakecomputing.com": {
                        "platform_instance": "sales_deployment",
                        "env": "PROD",
                    }
                },
                "native_query_parsing": True,
                "enable_advance_lineage_sql_construct": True,
            }
        )
    
        lineage: List[datahub.ingestion.source.powerbi.m_query.data_classes.Lineage] = (
            parser.get_upstream_tables(
                table,
                reporter,
                ctx=ctx,
                config=config,
                platform_instance_resolver=platform_instance_resolver,
            )
        )
    
        data_platform_tables: List[DataPlatformTable] = lineage[0].upstreams
    
>       assert len(data_platform_tables) == 4
E       assert 0 == 4
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:923: AssertionError
tests.integration.test_great_expectations::test_ge_ingest[test_checkpoint_2-ge_mcps_golden_2.json]
Stack Traces | 4.4s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn changed, urn:li:assertion:a9aec2b7ca9107b6616bd498ca396ca9:
2nd <assertionRunEvent> removed

Urn changed, urn:li:assertion:a545416429f257a5be8e7022eb2aabdb:
2nd <assertionRunEvent> removed

Urn changed, urn:li:assertion:b926615c45601b7a7cbad9e860e50629:
2nd <assertionRunEvent> removed

Urn changed, urn:li:assertion:54977b85189e46d05e88310873dcb75f:
2nd <assertionRunEvent> removed

Urn changed, urn:li:assertion:2a12a5bf6ca28cfdffa6e73debdd0591:
2nd <assertionRunEvent> removed
tests.integration.powerbi.test_m_parser::test_snowflake_double_double_quotes
Stack Traces | 4.41s run time
@pytest.mark.integration
    def test_snowflake_double_double_quotes():
        q = M_QUERIES[30]
    
        lineage: List[datahub.ingestion.source.powerbi.m_query.data_classes.Lineage] = (
            get_data_platform_tables_with_dummy_table(q=q)
        )
    
        assert len(lineage) == 1
    
        data_platform_tables = lineage[0].upstreams
    
>       assert len(data_platform_tables) == 1
E       assert 0 == 1
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:980: AssertionError
tests.integration.tableau.test_tableau_ingest::test_tableau_signout_timeout
Stack Traces | 4.54s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn changed, urn:li:dataset:(urn:li:dataPlatform:tableau,10c6297d-0dbd-44f1-b1ba-458bea446513,PROD):
<upstreamLineage> changed:
	Value of root[0] changed from 
		{"urn": "urn:li:dataset:(urn:li:dataPlatform:tableau,10c6297d-0dbd-44f1-b1ba-458bea446513,PROD)", "change_type": "UPSERT", "aspect_name": "upstreamLineage", "aspect": "<aspect>", "headers": null} to 
		{"urn": "urn:li:dataset:(urn:li:dataPlatform:tableau,10c6297d-0dbd-44f1-b1ba-458bea446513,PROD)", "change_type": "UPSERT", "aspect_name": "upstreamLineage", "aspect": "<aspect>", "headers": null}.
tests.integration.powerbi.test_m_parser::test_databricks_multicloud
Stack Traces | 7.33s run time
def test_databricks_multicloud():
        q = M_QUERIES[31]
    
        lineage: List[datahub.ingestion.source.powerbi.m_query.data_classes.Lineage] = (
            get_data_platform_tables_with_dummy_table(q=q)
        )
    
        assert len(lineage) == 1
    
        data_platform_tables = lineage[0].upstreams
    
>       assert len(data_platform_tables) == 1
E       assert 0 == 1
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:999: AssertionError
tests.integration.powerbi.test_m_parser::test_mssql_drop_with_select
Stack Traces | 13.1s run time
def test_mssql_drop_with_select():
        q = M_QUERIES[33]
    
        lineage: List[datahub.ingestion.source.powerbi.m_query.data_classes.Lineage] = (
            get_data_platform_tables_with_dummy_table(q=q)
        )
    
        assert len(lineage) == 1
    
        data_platform_tables = lineage[0].upstreams
    
>       assert len(data_platform_tables) == 1
E       assert 0 == 1
E        +  where 0 = len([])

.../integration/powerbi/test_m_parser.py:1037: AssertionError
tests.integration.test_great_expectations::test_ge_ingest[test_checkpoint-ge_mcps_golden.json]
Stack Traces | 14.8s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn changed, urn:li:assertion:0d2562733ca1096b6a89287e37b87e9e:
2nd <assertionRunEvent> removed

Urn changed, urn:li:assertion:6a54a953afa6403d7b6dffbf02ff3de4:
2nd <assertionRunEvent> removed

Urn changed, urn:li:assertion:e650fdf72c472afccfda2eb6e2e0152e:
2nd <assertionRunEvent> removed

Urn changed, urn:li:assertion:ab05b249c82cb5cdeb19743462125fae:
2nd <assertionRunEvent> removed

Urn changed, urn:li:assertion:2e3e33eaa148f19f6a6e37ba0c6fdb2f:
2nd <assertionRunEvent> removed
tests.integration.test_plugin::test_airflow_plugin[v2_custom_operator_sql_parsing]
Stack Traces | 40s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn removed, urn:li:dataset:(urn:li:dataPlatform:athena,athena_db.default.my_output_table,PROD)
Urn removed, urn:li:dataset:(urn:li:dataPlatform:athena,athena_db.default.my_input_table,PROD)

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,custom_operator_sql_parsing,prod),transform_cost_table):
2nd <dataJobInputOutput> removed
2nd <dataJobInfo> removed
tests.integration.test_plugin::test_airflow_plugin[v2_snowflake_operator]
Stack Traces | 40.4s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn removed, urn:li:dataset:(urn:li:dataPlatform:snowflake,datahub_test_database.datahub_test_schema.costs,PROD)
Urn removed, urn:li:dataset:(urn:li:dataPlatform:snowflake,datahub_test_database.datahub_test_schema.processed_costs,PROD)

Urn changed, urn:li:dataProcessInstance:3161034cc84e16a7c5e1906225734747:
<dataProcessInstanceOutput> removed
<dataProcessInstanceInput> removed

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,snowflake_operator,prod),transform_cost_table):
<dataJobInputOutput> changed:
	Value of root[0] changed from 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,snowflake_operator,prod),transform_cost_table)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null} to 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,snowflake_operator,prod),transform_cost_table)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null}.
tests.integration.test_plugin::test_airflow_plugin[v2_athena_operator]
Stack Traces | 42s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn removed, urn:li:dataset:(urn:li:dataPlatform:athena,athena_db.costs,PROD)
Urn removed, urn:li:dataset:(urn:li:dataPlatform:athena,athena_db.processed_costs,PROD)

Urn changed, urn:li:dataProcessInstance:9cd4fbcec3a50a4988ffc5841beaf0ad:
<dataProcessInstanceOutput> removed
<dataProcessInstanceInput> removed

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,athena_operator,prod),transform_cost_table):
<dataJobInputOutput> changed:
	Value of root[0] changed from 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,athena_operator,prod),transform_cost_table)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null} to 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,athena_operator,prod),transform_cost_table)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null}.
tests.integration.test_plugin::test_airflow_plugin[v2_bigquery_insert_job_operator]
Stack Traces | 43.5s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn removed, urn:li:dataset:(urn:li:dataPlatform:bigquery,test_project.test_dataset.processed_costs,PROD)
Urn removed, urn:li:dataset:(urn:li:dataPlatform:bigquery,test_project.test_dataset.costs,PROD)

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,bigquery_insert_job_operator,prod),insert_query_without_destination):
<dataJobInputOutput> changed:
	Value of root[0] changed from 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,bigquery_insert_job_operator,prod),insert_query_without_destination)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null} to 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,bigquery_insert_job_operator,prod),insert_query_without_destination)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null}.

Urn changed, urn:li:dataProcessInstance:c2b6bc237a75fe628ce6bf2c282b0e3c:
<dataProcessInstanceOutput> removed
<dataProcessInstanceInput> removed

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,bigquery_insert_job_operator,prod),select_with_destination_config):
<dataJobInputOutput> changed:
	Value of root[0] changed from 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,bigquery_insert_job_operator,prod),select_with_destination_config)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null} to 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,bigquery_insert_job_operator,prod),select_with_destination_config)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null}.

Urn changed, urn:li:dataProcessInstance:ad5cf6e007be4b127a8729275f118c3b:
<dataProcessInstanceOutput> removed
<dataProcessInstanceInput> removed
tests.integration.test_plugin::test_airflow_plugin[v2_sqlite_operator]
Stack Traces | 52.5s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn removed, urn:li:dataset:(urn:li:dataPlatform:sqlite,public.processed_costs,PROD)
Urn removed, urn:li:dataset:(urn:li:dataPlatform:sqlite,public.costs,PROD)

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),populate_cost_table):
<dataJobInputOutput> changed:
	Item aspect['outputDatasets'][0] removed from iterable.

Urn changed, urn:li:dataProcessInstance:04e1badac1eacd1c41123d07f579fa92:
<dataProcessInstanceOutput> removed

Urn changed, urn:li:dataProcessInstance:64e5ff8f552e857b607832731e09808b:
<dataProcessInstanceOutput> removed
<dataProcessInstanceInput> removed

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),transform_cost_table):
<dataJobInputOutput> changed:
	Value of root[0] changed from 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),transform_cost_table)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null} to 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),transform_cost_table)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null}.

Urn changed, urn:li:dataProcessInstance:07285de22276959612189d51336cc21a:
<dataProcessInstanceOutput> removed

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),create_cost_table):
<dataJobInputOutput> changed:
	Value of root[0] changed from 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),create_cost_table)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null} to 
		{"urn": "urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),create_cost_table)", "change_type": "UPSERT", "aspect_name": "dataJobInputOutput", "aspect": "<aspect>", "headers": null}.

Urn changed, urn:li:dataProcessInstance:bab908abccf3cd6607b50fdaf3003372:
<dataProcessInstanceOutput> removed

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),cleanup_costs):
<dataJobInputOutput> changed:
	Item aspect['outputDatasets'][0] removed from iterable.

Urn changed, urn:li:dataProcessInstance:fbeed1180fa0434e02ac6f75ace87869:
<dataProcessInstanceOutput> removed

Urn changed, urn:li:dataJob:(urn:li:dataFlow:(airflow,myairflow.sqlite_operator,prod),cleanup_processed_costs):
<dataJobInputOutput> changed:
	Item aspect['outputDatasets'][0] removed from iterable.
tests.integration.powerbi.test_powerbi::test_cll_extraction
Stack Traces | 97.6s run time
Metadata files differ (use `pytest --update-golden-files` to update):
Urn changed, urn:li:dataset:(urn:li:dataPlatform:powerbi,hr_pbi_test.ms_sql_native_table,DEV):
<upstreamLineage> removed

Urn changed, urn:li:dataset:(urn:li:dataPlatform:powerbi,library-dataset.snowflake_native-query-with-join,DEV):
<upstreamLineage> removed

Urn changed, urn:li:dataset:(urn:li:dataPlatform:powerbi,library-dataset.snowflake_native-query,DEV):
<upstreamLineage> removed


datahub-cyborg bot added the needs-review label on Oct 31, 2025

Labels

ingestion (PR or Issue related to the ingestion of metadata), needs-review (needs review from a maintainer)
