s3 host should be 's3.amazonaws.com' in endpoint #494

Closed
JayVem opened this issue Dec 8, 2016 · 8 comments

JayVem commented Dec 8, 2016

When testing file copy on S3, I see an error thrown as specified in the subject of this issue. I noticed in the code that there is a string pattern check for s3.amazonaws.com, which is an incorrect expectation. As you can see in this link, s3.us-east-2 is also a valid S3 endpoint, and so are many others. Please fix this; it especially fails when using the Minio client from inside an AWS Lambda function (which already has access to s3.us-east-2, since the Lambda executes in the same region).

@harshavardhana
Member

You can use s3.amazonaws.com directly; minio-java will figure out the right region using GetBucketLocation().

@balamurugana
Member

balamurugana commented Dec 8, 2016

@JayVem The check for s3.amazonaws.com is done so that minio-java consumers do not have to know the region of the bucket.

Passing the endpoint s3.amazonaws.com to MinioClient is good enough for any S3 operation. For example, statObject(String bucketName, String objectName) automatically figures out the bucket's region and makes a virtual-host-style REST call to Amazon S3.

Let me know if this works for you.
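
(For reference, a minimal sketch of this flow with the minio-java API of that era; the bucket and object names are borrowed from this thread and the credentials are placeholders:)

import io.minio.MinioClient;
import io.minio.ObjectStat;

public class StatExample {
    public static void main(String[] args) throws Exception {
        // The generic endpoint is enough; minio-java resolves the bucket's
        // region itself via GetBucketLocation on the first call.
        MinioClient client = new MinioClient("https://s3.amazonaws.com",
                                             "YOUR-ACCESSKEY", "YOUR-SECRETKEY");
        // statObject() then issues a virtual-host-style request to the
        // resolved regional endpoint.
        ObjectStat stat = client.statObject("dev-tmp", "test_download.txt");
        System.out.println(stat);
    }
}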

@JayVem
Author

JayVem commented Dec 9, 2016

String amzHost = url.host();
if (amzHost.endsWith(AMAZONAWS_COM) && !amzHost.equals(S3_AMAZONAWS_COM)) {
  throw new InvalidEndpointException(endpoint, "for Amazon S3, host should be 's3.amazonaws.com' in endpoint");
}

It doesn't work directly with s3.amazonaws.com - I receive an error in AWS Lambda saying that I should use the actual URL. As you can see from line 443 in the MinioClient.java constructor, the code is specifically looking for s3.amazonaws.com, and when I give it s3.us-east-2.amazonaws.com as the endpoint, it throws an exception.

@harshavardhana
Member

> It doesn't work directly with s3.amazonaws.com - I receive an error in AWS Lambda saying that I should use the actual URL. As you can see from line 443 in the MinioClient.java constructor, the code is specifically looking for s3.amazonaws.com, and when I give it s3.us-east-2.amazonaws.com as the endpoint, it throws an exception.

What is the error that you get in Lambda, @JayVem?

@balamurugana
Member

@JayVem Do you mean you have to use s3.<region>.amazonaws.com inside your Lambda function to avoid making an additional location query call?

FYI, minio-java has an inbuilt region cache, i.e. many different S3 calls on the same bucket use only one location query call.
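
(A minimal sketch of what such a cache amounts to; BucketRegionCache and its methods here are illustrative names, not minio-java's actual internals:)

import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: cache bucket -> region so that repeated S3 calls on
// the same bucket trigger at most one GetBucketLocation query.
class BucketRegionCache {
    private final ConcurrentHashMap<String, String> regions = new ConcurrentHashMap<>();

    String getRegion(String bucketName) {
        // computeIfAbsent runs the location query only on a cache miss.
        return regions.computeIfAbsent(bucketName, this::queryBucketLocation);
    }

    private String queryBucketLocation(String bucketName) {
        // Placeholder for the real GetBucketLocation REST call.
        return "us-east-1";
    }
}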

@JayVem
Author

JayVem commented Dec 12, 2016

OK, so here are the issues. When I use the credentials given by the Lambda environment with code like this:

private String awsKey = System.getenv("AWS_ACCESS_KEY_ID");
private String awsSecret = System.getenv("AWS_SECRET_ACCESS_KEY");

public Object handleRequest(Object request, Context context) {
	APDefaultS3Client s3Client = new APDefaultS3Client(awsKey, awsSecret);
	try {
		s3Client.download("test_download.txt", "dev-tmp", "/tmp/test.txt");
	} catch (AWSS3Exception e) {
		// TODO Auto-generated catch block
		e.printStackTrace();
	}
	return null;
}

I receive an error like this:
ErrorResponse(code=InvalidAccessKeyId, message=The AWS Access Key Id you provided does not exist in our records., bucketName=null, objectName=null, resource=null, requestId=F1B13237A4206A37, hostId=....=) request={method=GET, url=https://s3.amazonaws.com/dev-tmp?location=, headers=Host: s3.amazonaws.com User-Agent: Minio (amd64; amd64) minio-java/dev

Now, when I try with no credentials specified (which I thought should work, given that the Lambda has access to the S3 bucket) with this code:

public Object handleRequest(Object request, Context context) {
	APDefaultS3Client s3Client = new APDefaultS3Client();
	try {
		s3Client.download("test_download.txt", "dev-tmp", "/tmp/test.txt");
	} catch (AWSS3Exception e) {
		// TODO Auto-generated catch block
		e.printStackTrace();
	}
	return null;
}

then I receive the following error:

error occured ErrorResponse(code=AccessDenied, message=Access denied, bucketName=dev-tmp, objectName=test_download.txt, resource=/test_download.txt, requestId=97A2823B45EDF297, hostId=PbsJZe8Kh1VTv3yGXez3pyEco4hhrvQOC1mObFMU+cQQqiz0wJSFr6ZiQ4IkGbiaaEXhomIINHE=) request={method=HEAD, url=https://dev-tmp.s3.amazonaws.com/test_download.txt, headers=Host: dev-tmp.s3.amazonaws.com User-Agent: Minio (amd64; amd64) minio-java/dev

Please note that in both cases the server URL was s3.amazonaws.com. If I use anything else, the constructor throws the exception mentioned in the subject line.

@balamurugana
Member

@JayVem Could you check whether:

  • the given access/secret keys work outside of your program or Lambda function with Amazon AWS S3, and
  • the keys read from the OS environment are exactly the same as the ones set in the OS environment variables.
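
(A quick way to check the second point from inside the handler itself is to log which credential variables are present, without printing their values. Note that Lambda execution-role credentials are temporary and also include AWS_SESSION_TOKEN; a client that sends only a key/secret pair without the session token will typically be rejected. The snippet below is an illustrative sketch, not part of the original thread:)

public Object handleRequest(Object request, Context context) {
    // Illustrative sanity check: report which credential variables the
    // Lambda environment actually provides, without leaking their values.
    String key = System.getenv("AWS_ACCESS_KEY_ID");
    String secret = System.getenv("AWS_SECRET_ACCESS_KEY");
    String token = System.getenv("AWS_SESSION_TOKEN");
    System.out.println("access key present: " + (key != null)
            + ", secret present: " + (secret != null)
            + ", session token present: " + (token != null));
    return null;
}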

@harshavardhana
Member

Closing this bug as stale.
