wiki docs generation - fixups and fork ops #22
Merged
With #21 landed, I found a few issues:

- the workflow also ran on forks (as the `actions/` and `workflows/` directories existed in the repo)
- `GH_NAME` was wrong; it was supposed to be a username.

So this PR makes the following changes:
- adds a `SHOULD_RUN` check that determines whether we're running in the main repo, or whether a secret named `GENERATE_WIKI_PAGES` is set to `yes`
- each of the job's steps now only runs when `SHOULD_RUN` is true
- adds `internal/ansible/**` to the list of paths that trigger the workflow

So, on forks, this workflow will no longer fail. If you want it to run on your fork, you must enable the wiki, and you must set the secret in your fork.
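For reference, a minimal sketch of what the step-level gating could look like; the repository slug (`OWNER/REPO`), the step contents, and the trigger block are placeholders, not copied from the actual workflow file:

```yaml
# Hypothetical sketch only; names other than GENERATE_WIKI_PAGES and SHOULD_RUN are made up.
name: wiki-docs

on:
  push:
    paths:
      - 'internal/ansible/**'

jobs:
  generate:
    runs-on: ubuntu-latest
    env:
      # true when running in the main repo, or when a fork opts in via the secret
      SHOULD_RUN: ${{ github.repository == 'OWNER/REPO' || secrets.GENERATE_WIKI_PAGES == 'yes' }}
    steps:
      - name: Checkout
        if: env.SHOULD_RUN == 'true'
        uses: actions/checkout@v2

      - name: Generate wiki pages
        if: env.SHOULD_RUN == 'true'
        run: echo "generation happens here"
```

Note that the conditional has to be repeated on every step, which is exactly the annoyance described below.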
Without the above, all of the job's steps will be skipped; however, the job itself will still run (allocate a VM, build the action container, etc.). This is because we cannot use the `secrets` context nor the `env` context in the `jobs.<job_id>.if` conditional, so there's no way to use the secret trigger method AND skip the job as a whole.

Further thoughts: we might have more workflows (certain types of tests and other repo-level stuff, like maybe what @felixfontein proposed in #4) that should be opt-in or not run at all on forks. One way to avoid putting the conditionals on every step, and to use job-level conditionals instead, is to run a separate job that figures out the values and sets outputs.
The jobs that need the information could then use `needs:` to wait on that first job; a sketch of this is below. I really don't like that pattern, but it's one of the only ways to get around some of the more frustrating limitations in GHA. If we did that, we might consider a shared workflow for that initial fork-checking job, so that it can be re-used by this and future workflows.
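A rough sketch of that pattern, under the same assumptions as above (job names, output names, and the repository slug are made up for illustration):

```yaml
# Hypothetical sketch of a separate "fork check" job plus job-level conditionals.
jobs:
  fork-check:
    runs-on: ubuntu-latest
    outputs:
      should_run: ${{ steps.check.outputs.should_run }}
    steps:
      - id: check
        # the secrets context is available here, unlike in jobs.<job_id>.if
        run: echo "should_run=${{ github.repository == 'OWNER/REPO' || secrets.GENERATE_WIKI_PAGES == 'yes' }}" >> "$GITHUB_OUTPUT"

  generate-wiki:
    needs: fork-check
    # the whole job (and its VM) is skipped on forks that haven't opted in
    if: needs.fork-check.outputs.should_run == 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "wiki generation happens here"
```

The check job itself still allocates a short-lived runner, but the heavier jobs downstream are skipped at the job level instead of step by step.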
Perhaps I will wait until the second workflow that needs this pattern before implementing this...