
feat: add Slack notifications for the post-deploy assets check #16864

Merged
stevejalim merged 1 commit into main from add-post-deploy-asset-check-notifications
Nov 14, 2025

Conversation

@stevejalim
Contributor

Bedrock port of mozmeao/springfield#780

@stevejalim stevejalim requested a review from janbrasna November 14, 2025 12:13
@stevejalim stevejalim merged commit e82d7a9 into main Nov 14, 2025
4 checks passed
@stevejalim stevejalim deleted the add-post-deploy-asset-check-notifications branch November 14, 2025 13:53
Collaborator

@janbrasna janbrasna left a comment


LGTM.

I haven't seen the Slack notifications yet, but figured you were satisfied with how they turned out, and that the sparse checkout works as expected, before porting it here too. 🚢

Comment on lines +39 to 44
slack_bot_token: ${{ env.SLACK_BOT_TOKEN }}
status: info

asset-check:
runs-on: ubuntu-latest
steps:
Collaborator


Noticed this before, but I actually prefer it this way: the main job no longer waits for the initial notifier to finish before queuing for an available runner, so the important work starts right away in parallel, regardless of what the Slack notifier does. This can easily save 10s+ of run time.

The only difference is before, the run visualization connected the jobs like this, depending on each other:

[Screenshot: run graph with the jobs connected as dependencies]

This one plots that disconnected, as is the reality/logic of these:

[Screenshot: run graph with the jobs disconnected]

I'd deliberately keep it that way to speed things up; just thought it was worth noting so folks aren't surprised when seeing the diagram for the first time.
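The difference the two run graphs show can be sketched in workflow terms. This is an illustrative fragment, not the actual workflow: the action reference and script path are hypothetical placeholders.

```yaml
jobs:
  notify-start:
    runs-on: ubuntu-latest
    steps:
      - uses: some-org/slack-notify-action@v1  # hypothetical action name
        with:
          status: info

  asset-check:
    # With this line, asset-check queues only after notify-start finishes,
    # adding the notifier's runtime to the critical path (the "connected" graph).
    # Removing it lets both jobs start as soon as runners are available
    # (the "disconnected" graph).
    needs: notify-start
    runs-on: ubuntu-latest
    steps:
      - run: ./bin/post-deploy-asset-check.sh  # illustrative script name
```

Without `needs:`, the jobs have no dependency edge, which is why the visualization plots them disconnected.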

Contributor Author


Yeah, I wondered about chaining vs. triggering them in parallel. Faster is better really, and if for some reason the Slack API call fails (e.g. an outage), we'll still run the actual checks.

Might move the others to this pattern if we find no downsides in use.

Collaborator


+1 my thoughts exactly, so I'll simplify others to this pattern when I'm in the neighborhood…

(FWIW, the Slack API action has an escape hatch: on auth, API, or HTTP issues, errors, timeouts, and other outages it does not mark the run as failed and only annotates it, unless explicitly set to fail hard, so it should never fail the pipeline. But it's still good to trigger it in parallel and not depend on its results in any way…)
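As a belt-and-braces alternative if an action ever does fail hard, GitHub Actions offers `continue-on-error` at the step or job level. A minimal sketch, again with a hypothetical action name:

```yaml
  notify-start:
    runs-on: ubuntu-latest
    # Even if this job fails, the run's overall status is unaffected,
    # and asset-check doesn't depend on it anyway.
    continue-on-error: true
    steps:
      - uses: some-org/slack-notify-action@v1  # hypothetical
        with:
          slack_bot_token: ${{ env.SLACK_BOT_TOKEN }}
          status: info
```

This complements, rather than replaces, the parallel-trigger approach: the notifier stays off the critical path and its failures stay out of the run result.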
