
Disable sensor data collection #80

Merged

Conversation

juliakreger

Sensor data collection was intended for an earlier use of
the ironic-image and the resulting ironic container image, in order
to supply data to Prometheus about hardware that has not yet
been deployed. Since the decision was made not to leverage
the data, there is no point in continuing to collect it.

As such, disable sensor data collection.

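For reference, the change described above most likely amounts to flipping Ironic's conductor-level sensor-collection switch. The option names come from upstream Ironic's `[conductor]` section; the exact file path inside the image is an assumption:

```ini
# /etc/ironic/ironic.conf (path inside the image is an assumption)
[conductor]
# Stop polling BMCs for sensor data and publishing it on the
# notification bus (previously intended for consumption by Prometheus).
send_sensor_data = false
```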
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 27, 2020
@elfosardo

/lgtm

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: elfosardo, juliakreger

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [elfosardo,juliakreger]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label May 28, 2020
@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.


@elfosardo

/retest


@dhellmann

Do we want to make this change upstream, too?

@juliakreger
Author

@dhellmann I kind of want to say no. Openshift specific usage is not going to leverage this. Upstream could one day use it.

@dhellmann

> @dhellmann I kind of want to say no. Openshift specific usage is not going to leverage this. Upstream could one day use it.

I'm not even sure anyone else is aware this feature exists, but you may be right.


@openshift-merge-robot openshift-merge-robot merged commit 529e212 into openshift:master Jun 3, 2020
andfasano pushed a commit to andfasano/ironic-image that referenced this pull request Nov 18, 2020
Add set -e to all the entrypoint scripts
andfasano pushed a commit to andfasano/ironic-image that referenced this pull request Nov 18, 2020
I've seen some situations where mariadb is not up soon enough for the
dbsync, despite the 10 retries done inside ironic-dbsync.

In this situation we start the ironic services anyway, and since openshift#80 got reverted
we can't rely on pod/container restart to handle this, so let's retry inside
the script, logging a warning instead.
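The retry-inside-the-script approach that commit message describes can be sketched with a generic shell helper. The `ironic-dbsync` invocation at the end is illustrative only; the exact flags and config path used by the entrypoint are assumptions:

```shell
#!/bin/sh
# Generic retry helper: run a command until it succeeds or the
# attempt budget is exhausted, logging a warning on each failure.
retry() {
    max=10
    n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$max" ]; then
            echo "WARNING: '$*' still failing after $max attempts" >&2
            return 1
        fi
        echo "WARNING: '$*' failed (attempt $n); retrying..." >&2
        sleep 2
    done
}

# Hypothetical usage in the entrypoint (flags are an assumption):
# retry ironic-dbsync --config-file /etc/ironic/ironic.conf upgrade
```

Retrying in the script keeps the warning in the container log, rather than relying on the pod restart policy to paper over a slow mariadb startup.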
MahnoorAsghar pushed a commit to MahnoorAsghar/ironic-image that referenced this pull request Apr 19, 2024
I've seen some situations where mariadb is not up soon enough for the
dbsync, despite the 10 retries done inside ironic-dbsync.

In this situation we start the ironic services anyway, and since openshift#80 got reverted
we can't rely on pod/container restart to handle this, so let's retry inside
the script, logging a warning instead.