
Can we configure s3 bucket per pg cluster? #1209

Closed
raja-gola opened this issue Nov 12, 2020 · 1 comment · Fixed by #1794
Comments

@raja-gola

Can we configure s3 bucket per pg cluster?

Currently, the S3 buckets for WAL_S3_BUCKET and LOGICAL_BACKUP_S3_BUCKET come from the operator config.

    if c.OpConfig.WALES3Bucket != "" {
    	envVars = append(envVars, v1.EnvVar{Name: "WAL_S3_BUCKET", Value: c.OpConfig.WALES3Bucket})
    	envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_SUFFIX", Value: getBucketScopeSuffix(string(uid))})
    	envVars = append(envVars, v1.EnvVar{Name: "WAL_BUCKET_SCOPE_PREFIX", Value: ""})
    }

    // the logical backup bucket also comes from the operator config:
    envVars = append(envVars, v1.EnvVar{
    	Name:  "LOGICAL_BACKUP_S3_BUCKET",
    	Value: c.OpConfig.LogicalBackup.LogicalBackupS3Bucket,
    })

We need to configure the bucket per cluster. I tried to define these variables in pod_environment_configmap, but it looks like the operator config takes precedence.

        // add vars taken from pod_environment_configmap and pod_environment_secret first
	// (to allow them to override the globals set in the operator config)
	if len(customPodEnvVarsList) > 0 {
		envVars = append(envVars, customPodEnvVarsList...)
	}
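The comment above suggests the intent is first-one-wins: custom vars from pod_environment_configmap are appended before the operator-config globals, so a later deduplication pass that keeps the first occurrence of each name would let them override. As a minimal sketch of that idea (the `EnvVar` struct and `dedupEnvVars` helper here are illustrative, not the operator's actual code):

```go
package main

import "fmt"

// EnvVar mirrors the shape of k8s.io/api/core/v1 EnvVar for illustration.
type EnvVar struct {
	Name, Value string
}

// dedupEnvVars keeps the first occurrence of each variable name, so
// entries appended earlier (e.g. from pod_environment_configmap) win
// over entries appended later (e.g. operator-level defaults).
func dedupEnvVars(vars []EnvVar) []EnvVar {
	seen := map[string]bool{}
	out := make([]EnvVar, 0, len(vars))
	for _, v := range vars {
		if seen[v.Name] {
			continue // a var with this name was already added; skip the global
		}
		seen[v.Name] = true
		out = append(out, v)
	}
	return out
}

func main() {
	vars := []EnvVar{
		{Name: "WAL_S3_BUCKET", Value: "per-cluster-bucket"}, // custom var, appended first
		{Name: "WAL_S3_BUCKET", Value: "global-bucket"},      // operator config default
	}
	for _, v := range dedupEnvVars(vars) {
		fmt.Printf("%s=%s\n", v.Name, v.Value)
	}
	// prints: WAL_S3_BUCKET=per-cluster-bucket
}
```

Whether the operator actually deduplicates this way at the time of this issue is exactly what is in question; the reporter's observation below suggests the WAL and logical-backup bucket vars bypass this override path.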

Is there any way we could configure the S3 bucket per cluster and override the bucket defined in the operator config?

@Jan-M
Member

Jan-M commented Nov 13, 2020

Seems you have looked into this already; judging by that, I would assume probably not, if the code or docs don't show how.
