[MAINTENANCE] Remove runme fixtures/stages and enable docs-integration to run automatically #7812
Conversation
✅ Deploy Preview for niobium-lead-7998 canceled.
LGTM! In the future we might want to add the manual trigger as an `or`,
i.e. the pipeline runs on all PRs and can also be triggered manually when we want. But that's not necessary for this PR. Thanks for turning these tests back on!
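The "manual trigger as an `or`" suggestion could be sketched like the following. This is a hypothetical GitHub Actions fragment for illustration only; the project's real CI may use a different system (e.g. Azure Pipelines), and the workflow, job, and step names here are made up:

```yaml
# Hypothetical sketch: run docs-integration on every PR *and* allow manual runs.
# Names and steps are illustrative, not the project's actual CI configuration.
name: docs-integration

on:
  pull_request:        # runs automatically on all PRs
  workflow_dispatch:   # additionally allows manual triggering from the UI

jobs:
  docs-integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run docs integration tests
        run: pytest tests/integration/test_script_runner.py
```

Listing both events under `on:` gives the "or" semantics: either event starts the workflow.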
tests/integration/test_definitions/**
tests/integration/test_script_runner.py
Nice catch!
Yeah, for sure... it's nice to be able to trigger it manually when needed.
But we are gonna talk about a bunch of potential pipeline changes at the arch meeting tomorrow. I think we should make all/most pipelines and stages manually runnable.
* develop:
  - [FEATURE] Enable passing "spark_config" through to "SparkDFExecutionEngine" constructor as arguments to "add_spark*()" Fluent Datasources methods. (#7810)
  - [MAINTENANCE] Remove runme fixtures/stages and enable docs-integration to run automatically (#7812)
  - [MAINTENANCE] docs-integration re-start (#7735)
* develop: (29 commits)
  - [DOCS] Update docs for how_to_initialize_a_filesystem_data_context_in_python (#7831)
  - [MAINTENANCE] Clean up: Remove duplicated fixture and utilize deeper filtering mechanism for configuration assertions. (#7825)
  - [MAINTENANCE] Enable Spark-S3 Integration tests on Azure CI/CD (#7819)
  - [MAINTENANCE] FDS - Datasources can rebuild their own asset data_connectors (#7826)
  - [DOCS] Prerequisites Cleanup (#7811)
  - [BUGFIX] Azure Package Presence/Absence Tests Strengthening (#7818)
  - [RELEASE] 0.16.11 (#7824)
  - [MAINTENANCE] Fix pin count. (#7823)
  - [BUGFIX] Upper bound `pyathena` due to breaking API in V3 (#7821)
  - [MAINTENANCE] Fix linting error. (#7820)
  - [BUGFIX] Cloud - Fix FDS Asset has no attribute `_data_connector` (#7813)
  - [BUGFIX] AWS Docs reference clash (#7817)
  - [FEATURE] Enable passing "spark_config" through to "SparkDFExecutionEngine" constructor as arguments to "add_spark*()" Fluent Datasources methods. (#7810)
  - [MAINTENANCE] Remove runme fixtures/stages and enable docs-integration to run automatically (#7812)
  - [MAINTENANCE] docs-integration re-start (#7735)
  - [MAINTENANCE] Lint `tests/checkpoint` & `tests/execution_engine` (#7804)
  - [BUGFIX] Repair handling of regular expressions partitioning for cloud file storage environments utilizing prefix directive. (#7798)
  - [MAINTENANCE] Dont run runme_script_runner_tests stage on forks (#7807)
  - [DOCS] FDS Deployment Pattern - AWS: Spark and S3 (#7775)
  - [MAINTENANCE] Update all pytest calls in CI to show reason skipped (#7806)
  - ...
Changes proposed in this pull request: