
OADP-205: Implement registry inside openshift-velero-plugin #145

Merged
merged 11 commits on Jul 8, 2022

Conversation

kaovilai
Member

@kaovilai kaovilai commented May 24, 2022

Design doc
openshift/oadp-operator#737
OADP Implementation
openshift/oadp-operator#743

To test this PR, you will need to install the operator from openshift/oadp-operator#743
with the following command:

operator-sdk run bundle quay.io/tkaovila/oadp-operator-bundle:velero-plugin-registry --namespace openshift-adp

Use a DPA with unsupportedOverrides:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-sample
spec:
  unsupportedOverrides:
    openshiftPluginImageFqin: "quay.io/tkaovila/openshift-velero-plugin:velero-plugin-registry" 

Notable changes:

if imagecopy.UsePluginRegistry() {
    var err error
    ut, err = GetUdistributionTransportForLocation(backupLocation.GetUID(), backupLocation.Spec.StorageLocation, backupLocation.Namespace)
    if err != nil {
        return nil, err
    }
    // Point the migration-registry annotation at the in-process registry for this BSL.
    annotations[common.MigrationRegistry] = fmt.Sprintf("%s%s", imagecopy.BSLRoutePrefix, GetUdistributionKey(backupLocation.Spec.StorageLocation, backupLocation.Namespace))
} else { // falls back to the existing OADP registry route (see review discussion below)
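
For orientation, a minimal sketch of what UsePluginRegistry could look like, assuming the switch is an environment variable parsed with strconv.ParseBool (the commit list below mentions exactly that); the variable name is hypothetical, not taken from this PR:

// Hypothetical sketch: gate the plugin-registry code path on an env var.
// "USE_PLUGIN_REGISTRY" is an illustrative name, not necessarily the real one.
// Assumed imports: "os", "strconv".
func UsePluginRegistry() bool {
    use, err := strconv.ParseBool(os.Getenv("USE_PLUGIN_REGISTRY"))
    if err != nil {
        return false // unset or malformed value keeps the legacy behavior
    }
    return use
}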

// GetUdistributionTransportForLocation returns a cached udistribution
// transport for the given backup storage location, creating one from the
// location's registry environment variables on first use.
func GetUdistributionTransportForLocation(uid k8stypes.UID, location, namespace string) (*udistribution.UdistributionTransport, error) {
    if ut, found := udistributionTransportForLocation[uid]; found && ut != nil {
        return ut, nil
    }
    envs, err := GetRegistryEnvsForLocation(location, namespace)
    if err != nil {
        return nil, err
    }
    ut, err := udistribution.NewTransportFromNewConfig("", envs)
    if err != nil {
        return nil, err
    }
    udistributionTransportForLocation[uid] = ut // cache the transport
    return ut, nil
}
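
The lookup above implies a package-level cache keyed by the location's UID; a minimal sketch of the assumed declaration (the "make map" commit in the list below suggests exactly this shape):

// Sketch of the assumed package-level cache: one transport per backup
// storage location UID, so repeated plugin invocations reuse an
// already-configured registry instead of rebuilding it.
var udistributionTransportForLocation = make(map[k8stypes.UID]*udistribution.UdistributionTransport)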

Setting the transport name for containers/image/copy:

const dockerTransport = "docker://"
var (
    srcPath  = ""
    destPath = ""
)
if strings.HasPrefix(o.SrcRegistry, BSLRoutePrefix) {
    if o.Ut == nil {
        return errors.New("udistribution transport not found")
    }
    o.Log.Info(fmt.Sprintf("[imagecopy] copying image from BSL registry: %s", o.Ut.Name()))
    srcPath += o.Ut.Name() + "://"
    o.SrcRegistry = strings.TrimPrefix(o.SrcRegistry, BSLRoutePrefix)
} else {
    srcPath += dockerTransport
}
if strings.HasPrefix(o.DestRegistry, BSLRoutePrefix) {
    if o.Ut == nil {
        return errors.New("udistribution transport not found")
    }
    o.Log.Info(fmt.Sprintf("[imagecopy] copying image to BSL registry: %s", o.Ut.Name()))
    destPath += o.Ut.Name() + "://"
    o.DestRegistry = strings.TrimPrefix(o.DestRegistry, BSLRoutePrefix)
} else {
    destPath += dockerTransport
}
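
For context, a hedged sketch of how these transport-prefixed paths would typically feed into the containers/image copy API, assuming the udistribution transport registers itself with containers/image's transport registry; imageName, ctx, and policyContext are illustrative stand-ins, not identifiers from this PR.

// Illustrative only. Assumed imports:
//   "github.com/containers/image/v5/copy"
//   "github.com/containers/image/v5/transports/alltransports"
srcRef, err := alltransports.ParseImageName(srcPath + o.SrcRegistry + "/" + imageName)
if err != nil {
    return err
}
destRef, err := alltransports.ParseImageName(destPath + o.DestRegistry + "/" + imageName)
if err != nil {
    return err
}
// copy.Image pulls from srcRef and pushes to destRef in a single call.
if _, err := copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{}); err != nil {
    return err
}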

OADP-205

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 24, 2022
@kaovilai kaovilai changed the title Velero-plugin-registry Implement registry inside openshift-velero-plugin Jun 21, 2022
remove uid from internalRegistrySystemContext discovery function

fix bslNameForBackup map

reuse common.GetBackup

get rid of for loop in GetBackup

call rest.InClusterConfig() only once

wip plugin registry

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

wip

Use strconv.ParseBool

bump logrusr to v3 to accommodate logr v1.2

improve imagestream logging

resource name fix

more logs

is-restore typo

more log

error strings for registryenv

make space for decodedByte

error returns secret name and encoded data

data already decoded?

add logger to GetUdistributionTransportForLocation

fix getRegistryEnvsForLocation

make map
@kaovilai
Member Author

Resolved outstanding issues.
Backup and restore work as intended, at least for a single-image manifest:

time="2022-06-29T20:36:31Z" level=info msg="[imagecopy] Copying tag: \"oadp-registry\"" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:31Z" level=info msg="[imagecopy] copying image from BSL registry: udistribution-s3-f85eb12f-7a1f-47bf-a89f-39a8a24ef7ab" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:31Z" level=info msg="[imagecopy] copying from: udistribution-s3-f85eb12f-7a1f-47bf-a89f-39a8a24ef7ab://imagestream-test/python-sample@sha256:487730a3e642d07ce5a6ff11eebccec47cc7a0bd599727c6bc1b1ff129f4fa64" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:31Z" level=info msg="[imagecopy] copying to: docker://image-registry.openshift-image-registry.svc:5000/imagestream-test/python-sample:oadp-registry" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:31Z" level=info msg="copying image: udistribution-s3-f85eb12f-7a1f-47bf-a89f-39a8a24ef7ab://imagestream-test/python-sample@sha256:487730a3e642d07ce5a6ff11eebccec47cc7a0bd599727c6bc1b1ff129f4fa64; will attempt up to 7 times..." cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:32Z" level=info msg="[imagecopy] Copying tag: \"placeholder\"" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:32Z" level=info msg="[imagecopy] copying image from BSL registry: udistribution-s3-f85eb12f-7a1f-47bf-a89f-39a8a24ef7ab" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:32Z" level=info msg="[imagecopy] copying from: udistribution-s3-f85eb12f-7a1f-47bf-a89f-39a8a24ef7ab://imagestream-test/python-sample@sha256:b170df4fefd479a44c3617a7909a37a9343cbc47ad1ce319c436dc20f0ea7855" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:32Z" level=info msg="[imagecopy] copying to: docker://image-registry.openshift-image-registry.svc:5000/imagestream-test/python-sample:placeholder" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:32Z" level=info msg="copying image: udistribution-s3-f85eb12f-7a1f-47bf-a89f-39a8a24ef7ab://imagestream-test/python-sample@sha256:b170df4fefd479a44c3617a7909a37a9343cbc47ad1ce319c436dc20f0ea7855; will attempt up to 7 times..." cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:34Z" level=info msg="[imagecopy] copied at least one local image: true" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:34Z" level=info msg="[imagecopy] copied at least one local image by tag: true" cmd=/plugins/velero-plugins logSource="/opt/app-root/pkg/mod/github.com/bombsimon/logrusr/v3@v3.0.0/logrusr.go:108" pluginName=velero-plugins restore=openshift-adp/backup-rst
time="2022-06-29T20:36:34Z" level=info msg="Skipping restore of ImageStream: python-sample because a registered plugin discarded it" logSource="pkg/restore/restore.go:1156" restore=openshift-adp/backup-rst

@kaovilai kaovilai marked this pull request as ready for review June 29, 2022 20:42
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 29, 2022
github.com/hashicorp/go-plugin v1.3.0 // indirect
github.com/openshift/api v0.0.0-20210105115604-44119421ec6b
github.com/kaovilai/udistribution v0.0.5
Contributor

Should this point to somewhere other than your personal repo?

Contributor

I guess that's where the project lives right now -- i.e. it's not a fork. Should it be moved to openshift and/or konveyor though, since it's to be part of a supported product?

Member

+1 we should move this to Konveyor at some point. Indifferent on when right now. Can live in kaovilai until another project uses it.

Contributor

@sseago sseago left a comment

LGTM

@kaovilai
Member Author

kaovilai commented Jul 7, 2022

Just noticed (again?): the registry secret the plugin fetches is currently created by OADP-Operator. The secret parsing could also move into openshift-velero-plugin in the future if desired.

} else {
    backupRegistryRoute, err := getOADPRegistryRoute(backup.GetUID(), backup.Namespace, backup.Spec.StorageLocation, common.RegistryConfigMap)
    annotations[common.MigrationRegistry] = backupRegistryRoute
Member

@kaovilai We do not need to fetch the route for OADP, right?

Member Author

This is for backwards compatibility, e.g. if we make a 1.0.4 release that does not activate the plugin-registry behavior.

Member

We shouldn't need to worry about this since a 1.0.4 would be built off the oadp-1.0 branch. It doesn't hurt to have this, but I would be in favor of removing it.

Member Author

ok removing.

Member

@shubham-pampattiwar shubham-pampattiwar left a comment

Awesome job @kaovilai, thank you! Added some comments.

@dymurray dymurray changed the title Implement registry inside openshift-velero-plugin OADP-205: Implement registry inside openshift-velero-plugin Jul 7, 2022
Member

@dymurray dymurray left a comment

Overall LGTM; added a couple of comments, but nothing blocking merge.

@dymurray
Member

dymurray commented Jul 8, 2022

@kaovilai I do like the idea of enhancing this to do the parsing in the plugin rather than the operator. Something to explore as an enhancement.

@kaovilai
Member Author

kaovilai commented Jul 8, 2022

Manual test on AWS passing. Merging now.
