
HDDS-3021. Ozone S3 CLI path command not working on HA cluster. #553

Closed · 3 commits

Conversation

@bharatviswa504 (Contributor)

What changes were proposed in this pull request?

The ozone s3 path <> command does not work on an OM HA cluster.

This patch fixes the command to work on an OM HA cluster.

What is the link to the Apache JIRA?

https://issues.apache.org/jira/browse/HDDS-3021

How was this patch tested?

Added a unit test and also tested on the docker-compose om-ha-s3 cluster.

bash-4.2$ ozone s3 path b12345 --om-service-id=id1
Volume name for S3Bucket is : s34124bc0a9335c27f086f24ba207a4912
Ozone FileSystem Uri is : o3fs://b12345.s34124bc0a9335c27f086f24ba207a4912


@elek (Member) left a comment


+1 LGTM

Tested with ozone-om-ha-s3 AND ozonesecure docker compose and the path command worked well.

Thanks for the patch @bharatviswa504. Will merge it soon.

BTW, one (independent) question: why do I need to set the OM service id if it's already known from the config?

bash-4.2$ ozone s3 path bucket1
Service ID must not be omitted when ozone.om.service.ids is defined. Configured ozone.om.service.ids are [id1]
bash-4.2$ ozone s3 path bucket1 --om-service-id id1
Volume name for S3Bucket is : s3c89e813c80ffcea9543004d57b2a1239
Ozone FileSystem Uri is : o3fs://bucket1.s3c89e813c80ffcea9543004d57b2a1239

@elek closed this in 02b3925 on Mar 4, 2020
@bharatviswa504 (Contributor, Author)

The conf may have a service ID that belongs to a remote cluster. (I think this can happen when the conf has remote HA cluster details while the local cluster is non-HA.) So, in HA mode, we have mandated passing the serviceID for all commands, instead of making a best-effort guess.

One more scenario is possible in the future: with a federated OM that has multiple serviceIDs, like HDFS, these commands can still work in that case too, because passing the serviceID explicitly is mandatory.
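The strict rule described above can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical, not the actual Ozone code.

```java
import java.util.List;

// Hypothetical sketch of the strict serviceID rule described above;
// names are illustrative, not the actual Ozone implementation.
public class StrictServiceId {
  public static String resolve(String cliServiceId, List<String> configuredIds) {
    if (configuredIds.isEmpty()) {
      return null; // non-HA cluster: no service ID involved
    }
    if (cliServiceId == null) {
      // HA config present: the caller must pass --om-service-id explicitly
      throw new IllegalArgumentException(
          "Service ID must not be omitted when ozone.om.service.ids is defined."
          + " Configured ozone.om.service.ids are " + configuredIds);
    }
    if (!configuredIds.contains(cliServiceId)) {
      // Guards against a conf that only knows about a remote cluster
      throw new IllegalArgumentException("Unknown service ID: " + cliServiceId);
    }
    return cliServiceId;
  }
}
```

This matches the behavior shown in the shell output above: with ozone.om.service.ids defined, omitting --om-service-id fails fast rather than guessing.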

@adoroszlai (Contributor)

Thanks @bharatviswa504 for the explanation.

> So, in HA mode, we have mandated passing the serviceID for all commands, instead of making a best-effort guess.

But this also means that the same command does not work in both the non-HA and HA cases.

My current concern is that smoke tests need to be aware of the environment they run in. E.g. I wanted to add a test to verify the behavior of the ozone s3 getsecret command, but it needs to distinguish between the ozone and ozone-om-ha-s3 environments (as well as ozonesecure, but we already have an environment variable for that).

@elek (Member) commented on Mar 5, 2020

> The conf may have a service ID that belongs to a remote cluster. (I think this can happen when the conf has remote HA cluster details while the local cluster is non-HA.) So, in HA mode, we have mandated passing the serviceID for all commands, instead of making a best-effort guess.

Thanks for explaining it. Just for usability, I propose using the serviceID from the config file if it's set:

If serviceId is not specified but there is exactly one in the configuration: use that one.

I think it provides better usability (and it can also solve the testing problem of @adoroszlai).
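The proposed fallback could be sketched like this, under the assumption that an explicit --om-service-id always wins and multiple configured IDs remain an error. Again, the names are hypothetical, not the actual Ozone implementation.

```java
import java.util.List;

// Sketch of the proposed fallback: when --om-service-id is omitted but the
// configuration defines exactly one service ID, use that one. Names are
// illustrative, not the actual Ozone code.
public class LenientServiceId {
  public static String resolve(String cliServiceId, List<String> configuredIds) {
    if (cliServiceId != null) {
      return cliServiceId; // explicit flag always wins
    }
    if (configuredIds.size() == 1) {
      return configuredIds.get(0); // unambiguous: fall back to the config
    }
    if (configuredIds.isEmpty()) {
      return null; // non-HA cluster: no service ID involved
    }
    // More than one configured ID is still ambiguous: keep the strict error
    throw new IllegalArgumentException(
        "Service ID must be specified; multiple ozone.om.service.ids"
        + " are configured: " + configuredIds);
  }
}
```

With this rule, "ozone s3 path bucket1" in the single-ID ozone-om-ha-s3 environment would resolve to id1 without the flag, while a remote-cluster or federated configuration with several IDs would still require it.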
