Add log tagging for aggregation #1828
Conversation
/cc @m1kola
/hold -- need to add a commit to plumb the fields into the installer.
Codecov Report

```diff
@@            Coverage Diff             @@
##           master    #1828      +/-   ##
==========================================
+ Coverage   41.59%   41.81%   +0.22%
==========================================
  Files         362      363       +1
  Lines       34120    34141      +21
==========================================
+ Hits        14192    14277      +85
+ Misses      18725    18657      -68
- Partials     1203     1207       +4
```
Force-pushed from a5e6f8e to 54e0149.
Last commit has the installer plumbing. PTAL.
TODO: Deprovision pod logs??
Approach looks sane to me. I have opened openshift/enhancements#1197 to discuss
FWIW logs from the installer in
Thanks for the look!
👀
Cool, building...
Thanks. It looks like we're invoking the go code directly -- but we're doing so with a logger that I can decorate up front. Just have to decide how to pass the tags into
Force-pushed from 90dc4ec to 83bff85.
✓ TODO:
Force-pushed from eb9a9d8 to 0e6eb95.
Tested this (at 0e6eb95) with openshift/installer#6161 (at 0dc2623) and everything seems to be working as expected. Snippet of provision pod logs:
UT is WIP
Force-pushed from d5f6399 to e126229.
Still not quiiiite done with UT
Force-pushed from e126229 to 7775846.
Add support for a `hive.openshift.io/additional-log-fields` annotation on hive-owned CRs. If present, it is parsed as a JSON map containing arbitrary key/value pairs which are included in all log lines for controllers handling the CR. If the value of the annotation is nonempty and parseable, we will add `component=hive` to whatever it contains. (The assumption is that this is being used for log aggregation by an external consumer that needs to know that hive's logs came from hive.)

PROVISOS [1]:

- Our controllers currently reconcile on all and only the following objects (so annotating others will have no effect):
  - ClusterClaim
  - ClusterDeployment
  - ClusterDeprovision
  - ClusterPool
  - ClusterProvision
  - DNSZone
  - FakeClusterInstall
  - MachinePool

  The clusterpoolnamespace controller reconciles on Namespace; but we *do not* support these annotations there.
- Log fields are added to the logger for a given controller once it has loaded the CR. So there may be a couple of log messages that will be missed. They should be unimportant for consumers -- things like the initial "Reconciling $namespace/$name".
- The hive code *generates* CRs in certain circumstances.
  - ClusterProvision and ClusterDeprovision are generated when the CD is created or deleted, respectively. They inherit the annotation *at the time they are generated*. If you want the fields updated after that, you must edit the ClusterProvision/ClusterDeprovision object directly.
  - DNSZone can be generated if not already present for clusters with managed DNS. The clusterdeployment controller will generate the DNSZone with the annotation from the CD, but will also *sync* it from the CD on each reconcile.
  - The ClusterPool controller generates ClusterDeployments. We do *not* copy the annotation from the ClusterPool.Metadata; you would have to put it into ClusterPool.Spec.Annotations instead.
  - ClusterDeployment and MachinePool are generated by `hiveutil create cluster`. You can use the `-a/--annotations` flag to initialize the ClusterDeployment with the annotation [2]. The machinepool controller will *sync* it from the CD to the MachinePool on each reconcile.
- We currently do *not* support passing (any) annotations into `hiveutil clusterpool create-pool` or `hiveutil clusterpool claim`. In the future we may wish to add `-a/--annotations` like we have for `create cluster`.

[1] https://www.youtube.com/watch?v=pb0PuaikL4w

[2] The CLI syntax is a bit tricky. This wfm: `--annotations '"hive.openshift.io/additional-log-fields={""HELLO"":""WORLD""}"'`

HIVE-1966
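The annotation handling described above can be sketched in Go. This is a minimal illustration assuming only what the commit message states (a flat JSON map, with `component=hive` forced on top when the value is nonempty and parseable); the function name is hypothetical, not hive's actual API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseAdditionalLogFields parses the value of the
// hive.openshift.io/additional-log-fields annotation as a flat JSON map.
// When the value is nonempty and parseable, component=hive is added so an
// external aggregator can tell these log lines came from hive.
func parseAdditionalLogFields(raw string) (map[string]interface{}, error) {
	fields := map[string]interface{}{}
	if raw == "" {
		// Annotation absent or empty: no extra fields at all.
		return fields, nil
	}
	if err := json.Unmarshal([]byte(raw), &fields); err != nil {
		return nil, err
	}
	fields["component"] = "hive"
	return fields, nil
}

func main() {
	fields, err := parseAdditionalLogFields(`{"cluster_group":"team-a"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(fields["component"], fields["cluster_group"])
}
```

In a controller, a map like this would typically be attached to the logger (e.g. via logrus `WithFields`) right after the CR is loaded, which is why the first message or two of a reconcile can miss the fields.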
In preparation for needing to plumb some data from the ClusterProvision into the invocations of the openshift-install command, add a new field to InstallManager containing the actual ClusterProvision object. This allows us to change a bunch of function prototypes to remove it as an argument, as those functions can get it from the InstallManager argument they already accept (or are methods on). HIVE-1966
Get rid of yellow squiggly lines in my IDE.
A previous commit introduced support for adding fields specified via the `hive.openshift.io/additional-log-fields` annotation to all (well, most) log lines dealing with the CR containing the annotation. This commit makes the `provision` pod subsume those values and

- add them to log lines produced by install-manager commands running in that pod
- append them to each line (including redacted ones) of the install log, overriding `component` to `installer`

Note that, if the annotation is specified, log lines from hive will include `component=hive`; but we will configure the installer's `log-config.yaml` with `component=installer`.

HIVE-1966
In preparation for supporting additional log fields for aggregation, factor logger setup for hiveutil subcommands into a common utility function so we can do that in one place. HIVE-1966
- The `hiveutil` executable is changed to look for and process an environment variable, `HIVE_ADDITIONAL_LOG_FIELDS`. This is a JSON string representing a flat map of key/value pairs to be added to all log lines produced by `hiveutil` (and passed through to the installer for the same purpose).
- Each instance where hive code invokes `hiveutil` via `Job`s is wired to copy such JSON from the `hive.openshift.io/additional-log-fields` annotation on the object on which it is operating, to the env of the `Job` that will invoke `hiveutil`.

HIVE-1966
HIVE-1966
Force-pushed from 7775846 to 2effe85.
Updated to cut the installer log config out and instead wrap it and decorate its logs on the fly. Live test checks out. Snippet from controller logs:
Snippet from installer logs:
/hold cancel

ready for review
/retest-required
Looks solid and tests are thorough. 👍
/lgtm
```diff
@@ -30,6 +30,7 @@ func cleanupDNSZone(dynClient client.Client, cd *hivev1.ClusterDeployment, logge
 	dnsZoneNamespacedName := types.NamespacedName{Namespace: cd.Namespace, Name: controllerutils.DNSZoneName(cd.Name)}
 	if err := dynClient.Get(context.TODO(), dnsZoneNamespacedName, dnsZone); err != nil {
 		logger.WithError(err).Error("error looking up managed dnszone")
+		// TODO: Return the err?!?
```
Nice find.
```go
	log "github.com/sirupsen/logrus"
)

type AdditionalLogFieldHavinThing interface {
```
NIT: Would consider naming this something like `AdditionalLogFieldAnnotatedObject`, but I am completely fine with `AdditionalLogFieldHavinThing` as it conveys the same info to me.
Yeah, not the most formal/professional name, but it's an internal thing.
I'd just like to note (cc @dlom) that this is another example of using generics to increase the #LOC.
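The method set of `AdditionalLogFieldHavinThing` isn't visible in the excerpt above. A hedged sketch of how a generic helper over such an interface might look (everything except the type name and annotation key is a guess), illustrating the generics point being joked about:

```go
package main

import "fmt"

const additionalLogFieldsAnnotation = "hive.openshift.io/additional-log-fields"

// AdditionalLogFieldHavinThing is the interface named in the diff above;
// this method set is an assumption based on the surrounding discussion.
type AdditionalLogFieldHavinThing interface {
	GetAnnotations() map[string]string
}

// additionalLogFields is a hypothetical generic helper that pulls the raw
// annotation value off any conforming hive-owned object.
func additionalLogFields[T AdditionalLogFieldHavinThing](obj T) (string, bool) {
	val, ok := obj.GetAnnotations()[additionalLogFieldsAnnotation]
	return val, ok
}

// fakeCD stands in for a CR like ClusterDeployment for demonstration.
type fakeCD struct{ annotations map[string]string }

func (f fakeCD) GetAnnotations() map[string]string { return f.annotations }

func main() {
	cd := fakeCD{annotations: map[string]string{
		additionalLogFieldsAnnotation: `{"team":"a"}`,
	}}
	val, ok := additionalLogFields(cd)
	fmt.Println(ok, val)
}
```

(A plain interface parameter would do the same job in fewer lines, which is presumably the point of the generics quip.)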
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: 2uasimojo, abutcher

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
/retest

3 similar comments

/retest

/retest

/retest

/test e2e
@2uasimojo: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
New version includes support for log decoration with extra fields (see openshift/hive#1828 for more details).
New version includes support for log decoration with extra fields For more details see: * Azure#2284 * openshift/hive#1828
New version includes support for log decoration with extra fields For more details see: * #2284 * openshift/hive#1828