
Document how to work around unknown AWS regions in S3 driver #104

Closed
bparees opened this issue Jun 14, 2018 · 10 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation.

Comments

@bparees
Contributor

bparees commented Jun 14, 2018

much like k8s has done, we (well the upstream s3 filesystem driver) should trust the region list from AWS instead of us hardcoding it:
kubernetes/kubernetes#38880

that way when new regions are introduced, we automatically support them.

@legionus @dmage any reason this is a bad idea or hard to do?

Edit: see comment #104 (comment) which summarizes why we can't easily do this and describes a workaround we should document for users that allows them to bypass this check entirely.

@legionus
Contributor

legionus commented Jun 15, 2018

I think that such complex changes should be made only upstream, and we should pick them up with the next version (or a backport of the merged upstream PR). Otherwise we will end up with a really complicated carry patch.

@bparees
Contributor Author

bparees commented Jun 15, 2018

I think that such complex changes should be made only upstream, and we should pick them up with the next version (or a backport of the merged upstream PR). Otherwise we will end up with a really complicated carry patch.

Agree, but we may have to be the ones to submit the upstream PR.

@legionus
Contributor

Agree, but we may have to be the ones to submit the upstream PR.

After investigation: docker/distribution has a hardcoded list of regions; it checks the region at the time the s3 storage provider is constructed and prevents the registry from starting with a wrong configuration. At that point, there is still no metadata available. If we remove this check in favor of a later metadata check, we will allow the registry to launch with an invalid configuration, which would be a regression for the upstream. The linked comment is not applicable to docker/distribution: we can't trust the region if we get it from the configuration.

If we send a PR that removes this startup check, I'm pretty sure such a change will be rejected.
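To make the behavior described above concrete, here is a minimal sketch of that kind of construction-time validation: a hardcoded region allowlist that is bypassed when a custom endpoint is configured. The region names and function are illustrative, not the actual docker/distribution code.

```go
package main

import "fmt"

// validRegions mirrors the kind of hardcoded allowlist described above
// (the entries here are illustrative, not the real driver's list).
var validRegions = map[string]struct{}{
	"us-east-1": {},
	"us-west-1": {},
	"us-west-2": {},
	"eu-west-1": {},
}

// checkRegion rejects unknown regions at construction time. When a custom
// regionEndpoint is supplied, the check is skipped and any bad region only
// surfaces later, as an error at request (pull/push) time.
func checkRegion(region, regionEndpoint string) error {
	if regionEndpoint != "" {
		return nil // check bypassed when an endpoint is given
	}
	if _, ok := validRegions[region]; !ok {
		return fmt.Errorf("invalid region provided: %v", region)
	}
	return nil
}

func main() {
	fmt.Println(checkRegion("us-east-666", ""))                        // rejected at startup
	fmt.Println(checkRegion("us-east-666", "http://s3.amazonaws.com")) // allowed; fails later at AWS
}
```

This is also why simply deleting the check is a regression: without it, the misconfiguration is only detected on the first request instead of at startup.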

Also, this check is optional: it happens only when regionendpoint is not specified in the configuration. For example, you can set it to http://s3.amazonaws.com (but in that case you will also need v4auth=true) and then use any region you want. If the region is wrong, you will get an error at pull/push time:

time="2018-06-18T14:14:26.738508683Z" level=error msg="response completed with error" err.code=unknown err.detail="s3aws: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-666' is wrong; expecting 'us-east-1' status code: 400, request id: 57D7DFCDF71F42A9" http.response.status=500
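For documentation purposes, the workaround above might look like this in the registry's s3 storage configuration (a hedged sketch; the bucket name and region value are placeholders, and the exact set of required keys depends on your deployment):

```yaml
storage:
  s3:
    region: ap-east-999              # placeholder: a region the driver's list doesn't know yet
    regionendpoint: http://s3.amazonaws.com
    v4auth: true                     # required when regionendpoint is set this way
    bucket: my-registry-bucket       # placeholder
```

With regionendpoint set, the startup allowlist check is skipped entirely; a genuinely wrong region then fails at pull/push time with an error like the one shown above.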

@bparees
Contributor Author

bparees commented Jun 19, 2018

Also, this check is optional: it happens only when regionendpoint is not specified in the configuration. For example, you can set it to http://s3.amazonaws.com (but in that case you will also need v4auth=true) and then use any region you want. If the region is wrong, you will get an error at pull/push time

So we should have provided that workaround to the user who recently hit this? Perhaps it's worth documenting here: https://docs.openshift.org/latest/install_config/registry/extended_registry_configuration.html#docker-registry-configuration-reference-storage ?

That seems like a sufficient resolution to this issue.

@legionus
Contributor

So we should have provided that workaround to the user who recently hit this?

You tell me. I just follow the directions.

Perhaps it's worth documenting here

If you think that it's worth it, then we can.

@bparees
Contributor Author

bparees commented Jun 19, 2018

You tell me. I just follow the directions.

well it's largely moot now since the fix will be in their hands shortly and I think they found another workaround in the meantime. I was mostly asking if, in hindsight, you think this would have been a reasonable workaround for them if we'd known about it at the time. If the answer is yes, then I think it is worth documenting it so we don't forget about it the next time it comes up (e.g. when AWS introduces another new region).

@legionus
Contributor

I think it is worth documenting it so we don't forget about it the next time it comes up (e.g. when AWS introduces another new region).

Yes. As a quick fix it should help, but in the long term we should add the new region and backport it so that this check works again. So in general, nothing will change for us.

@bparees
Contributor Author

bparees commented Jun 20, 2018

Yes. As a quick fix it should help, but in the long term we should add the new region and backport it so that this check works again. So in general, nothing will change for us.

yes, we're going to have to keep adding new regions as they show up.

@bparees bparees changed the title AWS: trust region if found from AWS metadata Document how to work around unknown AWS regions in S3 driver Jun 20, 2018
@bparees
Contributor Author

bparees commented Jun 20, 2018

i've updated the title to reflect this is a documentation effort

@dmage dmage added the kind/documentation Categorizes issue or PR as related to documentation. label Jul 30, 2018
@dmage
Contributor

dmage commented Feb 26, 2019

@dmage dmage closed this as completed Feb 26, 2019