google/storage uses a legacy Amazon-compatible authentication system that still works, but it has limitations and requires some hackery in non-trivial cases. It looks for the parameters :google_storage_access_key_id and :google_storage_secret_access_key.
google/compute embraces the newer service-account model and accepts :google_project, :google_client_email, :google_key_location, :google_key_string, and :google_client.
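As a concrete sketch of the two vocabularies (option names are taken from the text above; the actual Fog constructor calls are left commented out since they need the fog gem and real credentials, and the values are placeholders):

```ruby
# Legacy, Amazon-style interoperability keys accepted by google/storage:
storage_params = {
  provider:                         "Google",
  google_storage_access_key_id:     "GOOG...",  # placeholder
  google_storage_secret_access_key: "secret"    # placeholder
}

# Service-account parameters accepted by google/compute:
compute_params = {
  provider:            "Google",
  google_project:      "my-project",
  google_client_email: "1234@developer.gserviceaccount.com",
  google_key_location: "/path/to/key.p12"
}

# storage = Fog::Storage.new(storage_params)
# compute = Fog::Compute.new(compute_params)
```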
Instances provisioned on Google Compute Engine can be authorized at launch time with service_account_scopes, which preauthorizes the instance for various Google OAuth scopes, e.g. https://www.googleapis.com/auth/devstorage.full_control. Once this is done, a GET request to the Google metadata server from that instance returns a valid token for the service, scoped to the instance's own project -- no other service accounts required.
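The metadata-server lookup described above can be sketched as follows; the endpoint and Metadata-Flavor header come from the GCE metadata API, and the actual HTTP call only succeeds when run on a GCE instance:

```ruby
require "net/http"
require "json"
require "uri"

# Build (but don't send) the request for the default service account's
# OAuth token from the GCE metadata server.
def metadata_token_request
  uri = URI("http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token")
  req = Net::HTTP::Get.new(uri)
  req["Metadata-Flavor"] = "Google"  # required header for the v1 metadata API
  [uri, req]
end

# On a preauthorized instance, fetching the token looks like:
#   uri, req = metadata_token_request
#   res   = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
#   token = JSON.parse(res.body)["access_token"]
```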
I would propose:

1. expanding google/storage's vocabulary to accept the same service-account parameters as google/compute
2. expanding google/compute's vocabulary to allow service_account_scopes to be set at instance launch time
3. adding a parameter to both google/compute and google/storage to attempt to use an OAuth token from the metadata service when fog is running on a preauthorized instance
This would let a fog user provision a Compute Engine node with fog and a provisioning service account, preauthorize that node for Cloud Storage (and/or other Google OAuth scopes), and then have that node interact with Cloud Storage, Datastore, etc. without being issued its own unique service account.
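To make the proposal concrete, a user-facing sketch might look like this. Note that :google_use_metadata_service is a hypothetical parameter name invented here purely for illustration -- it does not exist in fog:

```ruby
# Hypothetical: google/storage accepting the compute-style service-account
# vocabulary plus a (made-up) flag to fall back to the metadata service.
storage_options = {
  provider:                    "Google",
  google_project:              "my-project",
  google_use_metadata_service: true  # hypothetical name, not a real fog option
}

# On a preauthorized instance this could skip key material entirely:
# storage = Fog::Storage.new(storage_options)
```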
I can work on this and it doesn't look too terribly difficult, but I haven't contributed to fog before and this is really my first time looking at its internals. Before I waste too much effort, does this all sound worthwhile, and is there anyone actively maintaining the google stuff that I can coordinate with?
@icco could you review/comment?
This is being worked on.
I've never used it, but users have been using https://github.com/fog/fog/blob/master/lib/fog/google/models/compute/server.rb#L23 to get service accounts. If you read https://github.com/fog/fog/blob/master/lib/fog/google/compute.rb#L930, you can actually pass in your own auth client with the service scopes defined.
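For illustration, launching a server with scopes through that attribute might look roughly like this (untested sketch; attribute and option names may differ between fog versions, and the Fog calls are commented out since they need the gem and real credentials):

```ruby
# OAuth scopes to preauthorize the instance for (from the discussion above).
scopes = ["https://www.googleapis.com/auth/devstorage.full_control"]

# Rough sketch -- requires the fog gem and real credentials:
# compute = Fog::Compute.new(provider: "Google", google_project: "my-project",
#                            google_client_email: "...", google_key_location: "...")
# server = compute.servers.create(
#   name:             "worker-1",
#   machine_type:     "n1-standard-1",
#   zone_name:        "us-central1-a",
#   image_name:       "...",
#   service_accounts: scopes  # attribute referenced in server.rb above
# )
```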
This is dependent on 1.
Storage has been largely ignored because it has been stable. Now that the new API has been blessed, we need to rewrite the entire service to use the new auth and API. I just haven't had the time to do this.
I act as the main maintainer for the Google folder, and have pulled in a bunch of random people to contribute on the side, since this isn't any of our full time jobs. One of my coworkers was given an intern who starts soon and will be working on doing the storage upgrade, so my goal is to have all of the issues you mention fixed by the end of the summer. Because yes, the services should play nice, and right now they don't.
Awesome to hear, @icco - let me know what I can do to help. I may work on a hacky version of the storage upgrade in my own fork just to get something usable in the immediate term for a project, and I can collaborate with the new intern however needed over the summer.
@rfc2616, it sounds like the interns won't be working on this :( As such, if you're still interested in the work, we'd love to have your contributions.
OK. I'll get rolling on it.
To be clear, the interns are not working on 1. AFAIK, 2 and 3 (if they're not already working) will be soon.
So 2 and 3 would be working (or soon will be working) in compute, but the outstanding task would be to get storage to behave similarly. And, I guess, to try to preserve the current behavior so that apps using legacy keys don't suddenly break.
Correct. I think 1 and 3 for storage will probably be the hardest, because it will require switching to the "new" api.
I'm closing this issue in favor of fog/fog-google#38.
If you have something to share, please do so there. Thanks!