Clean logging #220

Merged
4 commits merged into master from clean-logging on Dec 16, 2021
Conversation

deitch (Contributor) commented Dec 14, 2021

This has two commits.

The first restructures some of the files and funcs without changing any functionality. It just reduces duplicate code, makes it easier to understand, separates some things into their own file, etc.

The second cleans up the logging:

  • do not return an error for something that isn't actually an error; if it is informative, log a message (at an appropriate log level) and return nil
  • if we have a non-fatal error in a loop, log the message, save the affected resource in a slice, and complete the loop. Only at the end should we return the error.

I have wanted to do this for a long time.

@@ -224,7 +226,8 @@ func newControlPlaneEndpointManager(eipTag, projectID string, deviceIPSrv packng
// the `default/kubernetes` service
func (m *controlPlaneEndpointManager) reconcileServices(ctx context.Context, svcs []*v1.Service, mode UpdateMode) error {
if m.eipTag == "" {
return errors.New("elastic ip tag is empty. Nothing to do")
Member commented:
I'm hitting this error in a cluster I just brought up under k8s 1.23, haven't looked into why yet.
Can you explain the change in behavior here?

Member replied:

(why is this not an error)

deitch (author) replied:

Yeah. The way it is defined, if you set an EIP tag, then you want CPEM to manage it; if you do not set an EIP tag, then you are telling CPEM, "there is nothing for you to manage at the control plane EIP level." Lack of EIP tag = "nothing to do here". That definitely is not an error, and never should have been treated as such.
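In simplified form, that reasoning looks like this. The sketch below is hypothetical and standalone (it uses the standard log package rather than the project's actual logger, and stands in for the real reconcileServices method): an empty EIP tag is informative, not an error, so it logs and returns nil.

```go
package main

import (
	"fmt"
	"log"
)

// reconcileServices is a simplified stand-in for the real method: an empty
// EIP tag means "there is nothing to manage", so log an informative message
// and return nil instead of an error.
func reconcileServices(eipTag string) error {
	if eipTag == "" {
		log.Println("elastic ip tag is empty, nothing to do")
		return nil
	}
	// ... actual reconciliation logic would go here ...
	return nil
}

func main() {
	fmt.Println(reconcileServices("") == nil)
}
```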

deitch (author):

Don't ask how many people that led astray. 🤦‍♂️

deitch (author):

I sort of did "Nothing to do." as an indication that this is not an error, just nothing that needs to be done here. The correct answer to that is, "so don't make it an error!" 😬

Signed-off-by: Avi Deitcher <avi@deitcher.net>
…ate before returning

Signed-off-by: Avi Deitcher <avi@deitcher.net>
deitch commented Dec 15, 2021

Rebased.

displague previously approved these changes Dec 15, 2021
displague left a comment:

Looks good to me - the errors that are returned vs logged make sense.

The watcher movement to a different file looks fine - perhaps we could move this to an internal package in the future or implement LoadBalancers (see my comment).

metal/loadbalancers.go: two outdated review threads (resolved)
)

// createNodesWatcher creates a cache.SharedIndexInformer for node handlers
func createNodesWatcher(ctx context.Context, informer informers.SharedInformerFactory, cloudServices []cloudService) (cache.SharedIndexInformer, error) {
displague commented Dec 15, 2021:
If we were to implement LoadBalancer, we would get these node/service watchers/informers for free.
I don't know whether kube-vip or metallb register themselves as cloud-provider LoadBalancers (and I'm not sure what that would really mean); I presume they do not. If they do, I would be curious what users would provide for the Service loadBalancerClass (https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-class): would it be cloud-provider-equinix-metal, kube-vip, or metallb?

Why do we not implement LoadBalancer?

deitch (author) replied:

We should. It is on my list of todos, to which I haven't gotten yet. Your work on InstancesV2() also inspired me to think about getting moving on that. If it simplifies our work, even better.

deitch (author):

But, like moving watchers.go into an internal package, subject for another PR.

Co-authored-by: Marques Johansson <mjohansson@equinix.com>
Co-authored-by: Marques Johansson <mjohansson@equinix.com>
Signed-off-by: Avi Deitcher <avi@deitcher.net>
deitch commented Dec 15, 2021

CI failed because of gofmt issues. Fixed and pushed.

deitch commented Dec 15, 2021

CI green, ready for you @displague

@displague displague merged commit 1c4d78d into master Dec 16, 2021
@deitch deitch deleted the clean-logging branch December 16, 2021 15:55