Add connection pools #272
Conversation
	}
} else {
-	err := s.client.Push(job)
+	err := s.helper.With(func(cl *faktory.Client) error {
The go faktory worker creates a connection pool already! We just needed to use it.
slick
-	config, _ := GetKubernetesConfig()
-	client, _ := GetKubernetesClientset()
-	// kubernetes.Clientset is thread-safe and designed to be shared across goroutines
+	config, client, _ := GetSharedK8sClient() // Already validated by LoadK8SClient
The removed comments were generally right about sharing clients. The client is thread-safe; we just need some synchronization around initializing it, since we load the k8s client per NewJobRunner.
We don't mutate any data from the clients, so this should be fine.
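For anyone skimming, here's a minimal sketch of the init-once pattern this converges on, assuming in-cluster config (the real code goes through the repo's GetKubernetesConfig / GetKubernetesClientset helpers; rest.InClusterConfig and kubernetes.NewForConfig stand in for them below):

```go
package pkg

import (
	"sync"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

var (
	k8sClientOnce      sync.Once
	sharedK8sConfig    *rest.Config
	sharedK8sClientset *kubernetes.Clientset
	k8sInitError       error
)

// GetSharedK8sClient builds the rest config and clientset exactly once and
// hands the same thread-safe clientset to every caller (e.g. each NewJobRunner).
func GetSharedK8sClient() (*rest.Config, *kubernetes.Clientset, error) {
	k8sClientOnce.Do(func() {
		// Assumption: in-cluster config; the repo's own helper may load
		// kubeconfig differently.
		sharedK8sConfig, k8sInitError = rest.InClusterConfig()
		if k8sInitError != nil {
			return // exits only this closure; the error is surfaced below
		}
		sharedK8sClientset, k8sInitError = kubernetes.NewForConfig(sharedK8sConfig)
	})
	if k8sInitError != nil {
		return nil, nil, k8sInitError
	}
	return sharedK8sConfig, sharedK8sClientset, nil
}
```

Concurrent callers all block on the same sync.Once, so only one config/clientset pair is ever built and then reused.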
Force-pushed from c6446b8 to 031d94d
k8sClientOnce.Do(func() {
	sharedK8sConfig, k8sInitError = GetKubernetesConfig()
	if k8sInitError != nil {
		return
Suggested change:
-		return
+		return nil, nil, k8sInitError
This return is for the anonymous function passed to Do(), so I think the empty return is correct here.
if k8sInitError != nil {
	return nil, nil, k8sInitError
}
if _, err := GetKubernetesClientset(); err != nil {
This method is no longer used, if we wanna 🔪 it.
🔪
Issues
We have an issue where we're using the Faktory Go worker manager but not its connection pool. So we create N connections and hold on to them, but then also open new connections ad hoc. This is why we've bumped into connection limit errors.
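The fix is to route pushes through the pool the worker manager already maintains, via the job helper. A minimal sketch of that pattern, assuming faktory_worker_go's Helper API (followUpProcessor and "SomeFollowUpJob" are illustrative names, not the actual processors/jobs in src/pkg):

```go
package main

import (
	"context"

	faktory "github.com/contribsys/faktory/client"
	worker "github.com/contribsys/faktory_worker_go"
)

type followUpProcessor struct {
	helper worker.Helper
}

// push borrows a connection from the manager's pool for the duration of the
// callback instead of opening an ad-hoc connection per push.
func (p *followUpProcessor) push(job *faktory.Job) error {
	return p.helper.With(func(cl *faktory.Client) error {
		return cl.Push(job)
	})
}

// perform is a job handler; HelperFor(ctx) exposes the manager's pooled
// connections inside the running job.
func perform(ctx context.Context, args ...interface{}) error {
	p := &followUpProcessor{helper: worker.HelperFor(ctx)}
	return p.push(faktory.NewJob("SomeFollowUpJob", args...))
}

func main() {
	mgr := worker.NewManager()
	mgr.Register("SomeJob", perform)
	// mgr.Run() // blocks; connects to Faktory using FAKTORY_URL
}
```

With this, job pushes reuse the manager's existing connections instead of opening new ones.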
Demo:
Running the old and new versions side by side, the new version holds a stable N + 1 connections, versus an essentially uncapped count before.
Work done:
- K8s Connection Pooling (src/pkg/k8s.go)
- Faktory Connection Pooling:
  - src/pkg/faktoryRunnerAppendJobLogProcessor.go
  - src/pkg/faktorySetOutcomeProcessor.go
Changelog
changie entry
Tophatting