
Sync admin passwords at cluster setup finish #1803

Merged

jaydoane merged 1 commit into apache:master from cloudant:configurable-auth-salt on Feb 12, 2019

Conversation

@jaydoane (Contributor) commented Dec 10, 2018

Overview

Sync admin passwords at cluster setup finish

Testing recommendations

I tested by changing this line to remove the hashify function like so:

    return contents + "\n%s = %s" % (user, pswd)

and then started CouchDB normally: dev/run -a adm:pass.

In a remsh I confirmed that all admin passwords are hashed the same after setup:

(node1@127.0.0.1)1> rpc:multicall(config, get, ["admins"]).
{[[{"adm",
    "-pbkdf2-454cf868445b52a1d7a527c20c1622925c8fab3b,561a3a73c5ae6276a2cb0fc47434975d,10"}],
  [{"adm",
    "-pbkdf2-454cf868445b52a1d7a527c20c1622925c8fab3b,561a3a73c5ae6276a2cb0fc47434975d,10"}],
  [{"adm",
    "-pbkdf2-454cf868445b52a1d7a527c20c1622925c8fab3b,561a3a73c5ae6276a2cb0fc47434975d,10"}]],
 []}
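
For reference, each stored value has the shape -pbkdf2-&lt;derived key hex&gt;,&lt;salt hex&gt;,&lt;iterations&gt;. Here is a minimal sketch of how such a value is assembled, loosely following couch_passwords.erl; the iteration count and exact layout are assumptions for illustration, not this patch's code:

    %% Sketch: assemble a "-pbkdf2-<key>,<salt>,<iterations>" admin value.
    %% couch_passwords:pbkdf2/3 and couch_uuids:random/0 are real CouchDB
    %% functions; the iteration count shown is illustrative.
    hash_admin_password(ClearPassword) ->
        Iterations = "10",
        Salt = couch_uuids:random(),  % fresh salt per stored credential
        DerivedKey = couch_passwords:pbkdf2(couch_util:to_binary(ClearPassword),
                                            Salt, list_to_integer(Iterations)),
        list_to_binary("-pbkdf2-" ++ binary_to_list(DerivedKey) ++ ","
                       ++ binary_to_list(Salt) ++ "," ++ Iterations).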

Related Issues or Pull Requests

Checklist

  • Code is written and works correctly;
  • Changes are covered by tests;
  • Documentation reflects the changes;
@wohali (Member) commented Dec 10, 2018

Hi @jaydoane ,

This doesn't fix the problems in #1781.

Please read the use cases in #1781 (comment) and comment (either here or there).

I am also not sure you want to use the exact same salt for all newly created admins - that would reduce security, unless I'm misunderstanding the patch?

@wohali (Member) commented Dec 12, 2018

@jaydoane do we still want to merge this, even though it only addresses one specific scenario discussed in #1781?

If so, please remove the "Fixes #1781" from the description, as it isn't a complete solution. I'd also like a +1 from someone more familiar with the code than I before merging.

@jaydoane (Contributor, Author) commented Dec 19, 2018

@wohali writes:

do we still want to merge this, even though it only addresses one specific scenario discussed in #1781?

I think I would put it somewhat differently. To summarize (and ignoring single-node configurations), there are basically two scenarios in which one can currently use the cluster setup code. Both start with an "enable_cluster" request to the coordinating node, and both end with a "finish_cluster" request to the coordinating node. The scenarios differ only in the following ways:

  1. In the "direct" scenario (which appears to have been implemented first), the client makes an "enable_cluster" request directly to each additional non-coordinating node in the cluster, followed by an "add_node" request to the coordinating node for the node being added.
  2. In the "remote" scenario, the client instead makes a "remote enable_cluster" request and a subsequent "add_node" request to the coordinating node for the node being added.

The key difference between the two is that in scenario 2, instead of directly making a request to the new nodes, the client makes requests to the coordinator, and then the coordinator makes requests to the new nodes. The main benefit of this approach is that the coordinator sends the admin password_hash to the new nodes rather than the password itself, so that the hashes are the same on all nodes.

However, when this patch runs during the "finish_cluster" request, it also syncs the password hash from the coordinator to the other nodes in the cluster, so scenario 1 likewise enjoys the benefit of synced admin password hashes (for scenario 2 this sync is redundant, since the coordinator has already propagated the hash). Additionally, it creates a cluster-wide password salt (if one has not already been pre-defined in a config file) and syncs that, so that any admin user subsequently added to the cluster will also have the same hashed password on every node.

It's that last bit that I thought was originally being requested, but it now sounds like there is a question about whether it's a good idea to salt all admin passwords with the same salt. If we decide that doing so is OK, then I think merging this patch will improve admin password administration overall. Otherwise, I'm fine with dropping it and closing this PR. I'm also happy to remove the "Fixes #1781" part, although, apart from not clarifying the documentation, I'm not sure in what sense this doesn't fix inconsistent admin password hashes during cluster setup.

@wohali (Member) commented Dec 20, 2018

@jaydoane OK, I agree with you that this is an improvement.

We can merge this once the tests pass, but will you take the step to improve the documentation at the same time? I'd like to do both at once, so if you don't have time before the holidays start, we can leave this until 2019.

@jaydoane (Contributor, Author) commented Dec 23, 2018

@wohali I would be down with working on the docs, but I'm not sure which route to take. Do we really want to document both scenarios, when it seems like (especially after this patch) there's no obvious upside to using one over the other?

It also occurs to me that, since one can now set up a cluster by making multiple requests to a single coordinator node, the whole process could in principle be made much simpler by changing the setup code to support a single request to the coordinator carrying all the necessary information in one data structure. For example, we could in principle deprecate the entire series of error-prone and confusing requests in favor of something like this:

POST /_cluster_setup
{
    "nodes": [
        {
            "url": "http://user2:pass2@127.0.0.1:25984",
            "name": "node2"
        },
        {
            "url": "http://user3:pass3@127.0.0.1:35984",
            "name": "node3"
        }
    ],
    "admins": [
        "adm:pass"
    ]
}

where "nodes" represents an optional list of additional nodes in the cluster, and "admins" is an optional (?) list of all admins to be configured. A "node" object could also include an optional "bind_address", and perhaps other useful fields. This would also allow for (possibly randomly generated) individual creds for each node prior to forming a cluster. The POST request itself implicitly requires the credentials "user1:pass1" to authenticate. I think I'll play around with prototyping this unless you think it's a terrible idea for some reason.

@wohali (Member) commented Jan 18, 2019

@jaydoane OK, I give up, let's not let perfect be the enemy of the good.

Can you please merge this, then get a docs PR up? Thanks!

@jaydoane jaydoane force-pushed the cloudant:configurable-auth-salt branch from d33b792 to 6ed0b85 Jan 22, 2019

@jaydoane jaydoane requested a review from rnewson Jan 22, 2019

@jaydoane (Contributor, Author) commented Jan 22, 2019

@rnewson I'm curious whether you think having the same salt configured on all cluster nodes is somehow a security problem? Since we already do something similar with [couch_httpd_auth] secret it seems ok to me, but I'd love another opinion.

@wohali (Member) commented Jan 22, 2019

@jaydoane I'm not @rnewson, but I can tell you that if the same salt isn't set correctly on all nodes, Fauxton behind a load balancer breaks (session cookies are signed with a key that incorporates the user's salt, so a cookie minted by one node fails validation on a node with a different salt).

@jaydoane (Contributor, Author) commented Jan 22, 2019

@wohali I was assuming I needed either an explicit +1 or a review approval to merge. Please LMK if that's not correct. Also, happy to merge without Bob's review.

@wohali (Member) commented Jan 22, 2019

You're right, you need a formal +1, and I'm opting out of reviewing this patch because my Erlang-fu is presently too weak. I was only commenting on the salt bit :)

@@ -41,18 +41,24 @@ hash_admin_password(ClearPassword) when is_binary(ClearPassword) ->
     hash_admin_password(Scheme, ClearPassword).
 
 hash_admin_password("simple", ClearPassword) -> % deprecated
-    Salt = couch_uuids:random(),
+    Salt = salt(),

@iilyak (Contributor) commented Jan 25, 2019

This reduces our security. For PBKDF2, the salt should be (see https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet):

  • unique for each stored credential
  • generated by a cryptographically strong PRNG
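
For illustration, a salt meeting both bullets could be generated like so; this is a sketch only (couch_uuids:random/0 plays this role in couch_passwords.erl, and new_salt/0 is a hypothetical name):

    %% Hypothetical helper: a fresh, cryptographically strong salt for each
    %% stored credential, hex-encoded for storage.
    new_salt() ->
        couch_util:to_hex(crypto:strong_rand_bytes(16)).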

@jaydoane (Contributor, Author) commented Jan 25, 2019

@iilyak thanks for pointing out those requirements for the salt. However, those requirements make me wonder about the security implications of our current practice of keeping [couch_httpd_auth] secret consistent among nodes, since that value is used in a similar way to generate cookie values.

Nevertheless, it's pretty clear that this approach will not fly. I think the general approach we would need to take is to obtain the hash prior to setting the admin creds, and then set (admin-username, password-hash), which won't be changed under the covers. This is sort of the approach taken here, although I believe there's a race, because that code doesn't explicitly wait until passwords are hashed; it just gets lucky that the hashing happens quickly enough.
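
A minimal sketch of that hash-first idea, assuming (as CouchDB's config handling does) that a value already in "-pbkdf2-" form is stored verbatim rather than re-hashed; set_prehashed_admin/2 is a hypothetical helper, not code from this PR:

    %% Hash once, then set the identical hash on every node, so no node
    %% re-hashes the clear password with its own salt.
    set_prehashed_admin(User, ClearPassword) ->
        Hash = binary_to_list(couch_passwords:hash_admin_password(ClearPassword)),
        [rpc:call(N, config, set, ["admins", User, Hash])
         || N <- [node() | nodes()]].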

@iilyak (Contributor) commented Feb 6, 2019

@jaydoane Please update the testing recommendations.

@iilyak (Contributor) left a comment

We cannot replace the random salt with a static value from config.

@jaydoane (Contributor, Author) commented Feb 5, 2019

In #1781 @wohali writes:

if you use the cluster setup wizard to create admin users on new nodes in the cluster, the created admin user will not use an identical salt on every node

This code, even without the changes to salt in couch_passwords.erl, might still be useful in that it would sync the admin passwords in the "direct" setup scenario. And speaking of which, that is presumably the scenario Fauxton is using, since if it were using the documented "remote" scenario (http://docs.couchdb.org/en/latest/setup/cluster.html#the-cluster-setup-api), those admin hashes should be the same on all nodes. I looked through the Fauxton source, but am not yet proficient enough with that code to determine which of the two setup scenarios it actually uses. Identifying these files as potentially relevant is about as far as I've delved.

@wohali (Member) commented Feb 5, 2019

@iilyak I don't see the security problem with using the same hash on every node. Fixing that hash to an always-the-same value, yes, I see that as a problem.

@iilyak (Contributor) commented Feb 5, 2019

@wohali writes: "@iilyak I don't see the security problem with using the same hash on every node. Fixing that hash to an always-the-same value, yes, I see that as a problem."

As you've said, the same hash for every node is OK. However, this PR is implemented in such a way that we use the same salt for every independent credential, which needs to be fixed.

@jaydoane jaydoane force-pushed the cloudant:configurable-auth-salt branch from 6ed0b85 to 01310a5 Feb 6, 2019

@jaydoane (Contributor, Author) commented Feb 6, 2019

I removed all salt-related code and kept the admin syncing code.

I tested by changing this line to remove the hashify function like so:

    return contents + "\n%s = %s" % (user, pswd)

and then confirmed that all admin passwords are hashed the same after setup:

(node1@127.0.0.1)1> rpc:multicall(config, get, ["admins"]).
{[[{"adm",
    "-pbkdf2-454cf868445b52a1d7a527c20c1622925c8fab3b,561a3a73c5ae6276a2cb0fc47434975d,10"}],
  [{"adm",
    "-pbkdf2-454cf868445b52a1d7a527c20c1622925c8fab3b,561a3a73c5ae6276a2cb0fc47434975d,10"}],
  [{"adm",
    "-pbkdf2-454cf868445b52a1d7a527c20c1622925c8fab3b,561a3a73c5ae6276a2cb0fc47434975d,10"}]],
 []}

Presumably, with this change we could also remove the hashify function and the "from pbkdf2 import pbkdf2_hex" line from setup.

@jaydoane jaydoane removed the request for review from rnewson Feb 6, 2019



sync_admin(User, Pass) ->
    {Results, Errors} = rpc:multicall(nodes(), config, set,

@iilyak (Contributor) commented Feb 6, 2019

Should we use nodes() -- [node()]?

@jaydoane (Contributor, Author) commented Feb 8, 2019

The docs say "Returns a list of all visible nodes in the system, except the local node. Same as nodes(visible)." so nodes() -- [node()] == nodes().
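
A quick remsh check of that equivalence (the output assumes all cluster nodes are currently connected and visible):

(node1@127.0.0.1)1> nodes() -- [node()] =:= nodes().
true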

@iilyak (Contributor) commented Feb 12, 2019

While testing this PR I noticed a problem: one of the nodes was not getting the correct hash for the adm user. I added a debug statement that logs the list of nodes returned by nodes(), and it turns out that it sometimes doesn't contain all the nodes in the cluster. Maybe we should use mem3:nodes() instead?

@jaydoane (Contributor, Author) commented Feb 12, 2019

@iilyak I factored out a new function, other_nodes/0, which uses mem3:nodes/0. I subsequently re-ran the manual dev/run test half a dozen times, and it's working for me.
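
A minimal sketch of the reworked shape described above (the real implementation lives in setup.erl; the error handling shown here is illustrative):

    %% mem3:nodes/0 reads cluster membership from the nodes database, which
    %% is stable even when net-kernel connections are still being
    %% established, unlike erlang's nodes/0.
    other_nodes() ->
        mem3:nodes() -- [node()].

    sync_admin(User, Pass) ->
        {_Results, Errors} =
            rpc:multicall(other_nodes(), config, set, ["admins", User, Pass]),
        Errors =:= [] orelse couch_log:error("sync_admin errors: ~p", [Errors]).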

@jaydoane jaydoane force-pushed the cloudant:configurable-auth-salt branch from 01310a5 to 8991678 Feb 8, 2019

@jaydoane (Contributor, Author) commented Feb 8, 2019

Testing recommendations updated

@wohali wohali referenced this pull request Feb 8, 2019

Merged

2.3.1 Release Proposal #1908

@jaydoane jaydoane referenced this pull request Feb 9, 2019

Closed

Request for release 2.3.1 #1180

Sync admin password hashes at cluster setup finish

This ensures that admin password hashes are the same on all nodes when
passwords are set directly on each node rather than through the
coordinator node.

@jaydoane jaydoane force-pushed the cloudant:configurable-auth-salt branch from 8991678 to a6c1988 Feb 12, 2019

@iilyak (Contributor) approved these changes Feb 12, 2019

+1. Thank you @jaydoane for your persistence.

@jaydoane jaydoane merged commit e699fe6 into apache:master Feb 12, 2019

1 check passed

continuous-integration/travis-ci/pr: The Travis CI build passed

@jaydoane jaydoane deleted the cloudant:configurable-auth-salt branch Feb 12, 2019
