Refactor connection pooler #1159
Conversation
@RafiaSabih can you rebase your PR, so that it does not list unrelated commits?
- Refactor code for connection pooler deployment and services
- Refactor sync code for connection pooler
- Rename EnableConnectionPooler to EnableMasterConnectionPooler
- Update yamls and tests
- Update deleteConnectionPooler to include role
- Rename EnableMasterConnectionPooler back to the original name for backward compatibility
- Other minor changes and code improvements
- Refactor needConnectionPooler for master and replica separately
- Improve sync function
- Add test cases to create, delete and sync with replica connection pooler
Other changes:
- Fixed the issue with failing test cases
- Add more test cases for replica connection pooler
- Added docs about the new flag
- Have one unified function to tell if any connection pooler is required
- Add a helper function to list the roles that require a connection pooler, which helps avoid code duplication
…erator into pooler-refac
Current issues open for discussion:
One way could be to keep the functions like createConnectionPooler, deleteConnectionPooler, and updateConnectionPooler, which use KubeClient, in the cluster package itself, but then it kind of defeats the purpose of refactoring.
pkg/cluster/cluster.go
Outdated
```go
processMu sync.RWMutex // protects the current operation for reporting, no need to hold the master mutex
specMu    sync.RWMutex // protects the spec for reporting, no need to hold the master mutex

ConnectionPooler map[PostgresRole]*connection_pooler.ConnectionPoolerObjects
```
So this whole PR is not solely a refactoring but rather a code change that enables separate connection poolers for the master and replicas?
pkg/cluster/cluster.go
Outdated
```go
roles := c.RolesConnectionPooler()
for _, r := range roles {
	c.logger.Warningf("found roles are %v", r)
```
Why Warningf and not Infof? At this point there is technically nothing bad happening.
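That is, the suggestion amounts to emitting the same message at info level:

```go
c.logger.Infof("found roles are %v", r)
```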
pkg/cluster/cluster.go
Outdated
```go
for _, r := range c.RolesConnectionPooler() {
	if c.ConnectionPooler[r] != nil {
		c.logger.Warning("Connection pooler already exists in the cluster")
```
It would be more descriptive here to also log the role for which the pooler exists, and the name of this pooler, similar to how errors are processed below in this code.
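A minimal sketch of such a message, assuming a hypothetical `c.connectionPoolerName(r)` helper that returns the pooler's name:

```go
// connectionPoolerName is a hypothetical helper; the real code might take
// the name from the pooler object stored for this role instead
c.logger.Warningf("connection pooler %q for role %q already exists in the cluster",
	c.connectionPoolerName(r), r)
```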
```go
	return nil
}

func syncResources(a, b *v1.ResourceRequirements) bool {
```
I understand this function was named like that in the original code, but judging from what it does, shouldSyncResources or needToSyncResources is a better name for it from a readability perspective.
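For illustration, a sketch of the rename, under the assumption that the function simply compares the two requirement structs (the original comparison may be more granular than DeepEqual):

```go
package cluster

import (
	"reflect"

	v1 "k8s.io/api/core/v1"
)

// shouldSyncResources reports whether the two resource requirement specs
// differ and therefore need to be synced
func shouldSyncResources(a, b *v1.ResourceRequirements) bool {
	return !reflect.DeepEqual(a, b)
}
```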
```go
package pooler_interface

// functions of cluster package used in connection_pooler package
type pooler interface {
```
Is this interface meant to help resolve cyclic dependencies?
```go
	pgPort = 5432
)

// K8S objects that are belongs to a connection pooler
```
typo: `are belongs` -> `belong`
```go
const (
	// Master role
	Master PostgresRole = "master"
```
Role constants are already defined in pkg/cluster/types.go. Why are they re-defined here?
I feel they are all caused by the fundamental circular dependency between the cluster and connection_pooler packages. We can try to resolve the dependency by making the connection_pooler package define the interface it consumes; one can have the effective implementation of the interface in the cluster package.
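To make that concrete, a minimal sketch of the consumer-defined interface pattern (the method set is purely illustrative, not the PR's actual declarations):

```go
// Package connection_pooler declares only the cluster behaviour it needs.
// *cluster.Cluster would satisfy this interface implicitly, so this package
// never has to import cluster, which breaks the import cycle.
package connection_pooler

// pooler's methods here are illustrative assumptions, not real signatures.
type pooler interface {
	ClusterName() string
	TeamName() string
}
```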
@RafiaSabih could you update the PR description with
Keep connectionPooler in the cluster package itself. This helps in getting rid of the previously added packages and interfaces. Based on the new structure of ConnectionPooler in cluster, functional changes are still to come; this commit only updates and cleans up the packages.
create and sync for the connectionPooler were doing essentially the same task with respect to creating a new connectionPooler if not present. Hence, the code from create is already present in sync, which is now reused, and the duplicate code is removed. Other code duplication in sync, for deciding when to delete the connectionPooler, is also removed: basically, anytime newNeedConnectionPooler is false, we have to delete the pooler, if present. Respective modifications in tests.
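A condensed sketch of that decision flow (the createConnectionPooler name and the exact signatures are assumed from the surrounding diffs, not copied from the PR):

```go
// sketch: sync subsumes create — build the pooler when it is needed but
// missing, and delete it whenever it is no longer needed
func (c *Cluster) syncConnectionPooler(role PostgresRole, needPooler bool) error {
	if needPooler {
		if c.ConnectionPooler[role] == nil {
			return c.createConnectionPooler(role) // same path create used
		}
		return nil
	}
	// anytime newNeedConnectionPooler is false, delete the pooler if present
	if c.ConnectionPooler[role] != nil {
		return c.deleteConnectionPooler(role)
	}
	return nil
}
```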
pkg/cluster/connection_pooler.go
Outdated
```go
//delete connection pooler
func (c *Cluster) deleteConnectionPooler(role PostgresRole) (err error) {
	//c.setProcessName("deleting connection pooler")
```
Should it be commented out?
Refactor the connection pooler to be a map keyed by PostgresRole, as we now require a connection pooler for each service.
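Under that shape, per-role handling becomes a plain map iteration, for example (a sketch; it assumes the pooler objects now live unqualified in the cluster package, per the cleanup commit above):

```go
// one pooler object per role that requires one (sketch)
c.ConnectionPooler = make(map[PostgresRole]*ConnectionPoolerObjects)
for _, r := range c.RolesConnectionPooler() {
	c.ConnectionPooler[r] = &ConnectionPoolerObjects{}
}
```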