
Allow bind address of 127.0.0.1 in _cluster_setup for single node #593

Closed

wohali opened this issue Jun 10, 2017 · 9 comments

wohali (Member) commented Jun 10, 2017

Expected Behavior

A user should be able to configure a single node CouchDB installation to bind to 127.0.0.1 through the Fauxton Setup wizard or via the /_cluster_setup endpoint.

Current Behavior

Right now, if you use Fauxton to configure a single node CouchDB installation, you are forced to set a bind address other than 127.0.0.1. The same is true for the /_cluster_setup endpoint.

While this makes sense for configuring an actual cluster, it does not for a single node setup.

Possible Solutions

I see two ways to solve this problem:

  1. Change Fauxton's single node setup wizard to directly access the /_node/couchdb@localhost/_config endpoint (substituting the correct node name as queried via /_membership) to alter the admin user, bind address and port as desired.

  2. Improve the /_cluster_setup endpoint to accept a new "action": "enable_single_node" that tolerates binding to 127.0.0.1. This would have to be paired with changing the validation function in Fauxton to accept that bind address when in the single node workflow.

I have a mild preference for 2, though 1 requires less work on the backend and doesn't stretch an endpoint intended for setting up clusters into also setting up a single node.
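
For illustration, a request under option 2 might look something like this. It's a sketch, not a settled API: everything beyond "action" (field names, credentials, port) is an assumption at this point.

```python
import requests

# Hypothetical sketch of option 2: a new enable_single_node action on
# /_cluster_setup that tolerates a loopback bind address. Field names
# beyond "action" are assumptions, not a settled API.
resp = requests.post(
    "http://127.0.0.1:5984/_cluster_setup",
    auth=("admin", "password"),          # placeholder credentials
    json={
        "action": "enable_single_node",  # proposed new action
        "bind_address": "127.0.0.1",     # currently rejected by the endpoint
        "username": "admin",
        "password": "password",
        "port": 5984,
    },
)
print(resp.status_code, resp.json())
```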

Input from @garrensmith and @janl requested.

Steps to Reproduce (for bugs)

  1. Install a post-2.0 CouchDB.
  2. Use Fauxton to navigate to the setup wizard.
  3. Try to set up a single node that stays bound to 127.0.0.1. It will fail.
  4. Alternatively, use the /_cluster_setup endpoint to do the same thing. It also fails (see the sketch after this list).
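
For concreteness, here's a minimal reproduction of step 4. Sketch only: the payload shape follows the existing enable_cluster action, and the credentials and port are placeholders.

```python
import requests

# Reproduce step 4: ask /_cluster_setup to accept a loopback bind address.
# Payload shape follows the existing "enable_cluster" action; credentials
# and port are placeholders.
resp = requests.post(
    "http://127.0.0.1:5984/_cluster_setup",
    auth=("admin", "password"),
    json={
        "action": "enable_cluster",
        "bind_address": "127.0.0.1",  # rejected: must be non-loopback today
        "username": "admin",
        "password": "password",
        "node_count": 1,
    },
)
print(resp.status_code, resp.text)  # expect an error response today
```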

Context

Real-world users very often expect a single node CouchDB installation to be bound to localhost only.

wohali (Member, Author) commented Jun 10, 2017

I'd like to patch this pre-2.1 if we can. The work is pretty straightforward, but ideally we'll want to write tests for it as well.

wohali modified the milestone: 2.1.0 on Jul 3, 2017

janl (Member) commented Jul 5, 2017

The way forward is definitely option 2:

> Improve the /_cluster_setup endpoint to accept a new "action": "enable_single_node" that tolerates binding to 127.0.0.1. This would have to be paired with changing the validation function in Fauxton to accept that bind address when in the single node workflow.

wohali (Member, Author) commented Jul 10, 2017

OK, so I've started implementation and ran into a concern along the way: the setup app currently determines whether a cluster has "finished" setup by checking for the presence of _global_changes, _users and _replicator as databases on the clustered endpoint. That assumption may not always hold, especially for _global_changes.

One scenario that worries me a lot: imagine an automated deployment tool (Chef/Puppet/Ansible/etc.) using GET /_cluster_setup to validate that nodes are functional. In a cluster where one of the three default databases has been deliberately removed (e.g. _global_changes), a tool that uses the output of GET /_cluster_setup to decide whether to (re-)finish setup could fall into a never-ending create/delete loop. Even when the action is manual, we know there are situations where a create/delete/create cycle can lead to race conditions and other badness.
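
To make that concrete, here's a sketch of the kind of convergence loop I have in mind. The tool's check, the state string, and the credentials are all assumptions for illustration.

```python
import requests

BASE = "http://127.0.0.1:5984"
AUTH = ("admin", "password")  # placeholder credentials

def converge():
    # Hypothetical config-management check: poll GET /_cluster_setup and
    # (re-)finish setup whenever the reported state isn't "finished".
    # The exact state string is an assumption for illustration.
    state = requests.get(f"{BASE}/_cluster_setup", auth=AUTH).json().get("state")
    if state != "cluster_finished":
        # Because "finished" is inferred from the presence of the three
        # default databases, a deliberately deleted _global_changes makes
        # this branch fire on every run: the tool re-creates the database,
        # an operator deletes it again, and the loop never converges.
        requests.post(f"{BASE}/_cluster_setup", auth=AUTH,
                      json={"action": "finish_cluster"})
```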

I realize this is extending the scope of this a little bit, but I'm having to monkey with the very bit that checks for the presence of those 3 databases anyway, so it's not too much trouble.

So, the changes are (a sketch of the resulting API follows the list):

  • Update to the finish_cluster endpoint to allow an additional parameter, ensure_dbs_exist=["db1", "db2", ...] where those clustered databases will be checked for and created if they do not exist. Matching the current behaviour, the default list is ["_users", "_replicator", "_global_changes"].
  • Update to the GET /_cluster_setup method to allow optionally specifying ensure_dbs_exist to match the finish_cluster endpoint.
  • The "action": "enable_single_node" will allow binding to 127.0.0.1. It will also set [cluster] n=1. The action will also take the ensure_dbs_exist parameter.
  • Fauxton will need to be updated for the single-node behaviour. I'm not planning to expose the ensure_dbs_exist parameter in that UI; if people want non-standard cluster setup behaviour they can use the endpoint directly.
  • (Optional) The ensure_dbs_exist list from either action finish_cluster or enable_single_node could be persisted in the ini file. This way it could be used later as the updated default for the GET /_cluster_setup db list.
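
Here's a sketch of what the resulting calls might look like. Parameter spellings follow the proposal above, but none of this is final; in particular, how ensure_dbs_exist would be passed on GET (query string vs. body) is an assumption.

```python
import json
import requests

BASE = "http://127.0.0.1:5984"
AUTH = ("admin", "password")  # placeholder credentials

# finish_cluster with an explicit database list; omitting ensure_dbs_exist
# would keep the current default of ["_users", "_replicator", "_global_changes"].
requests.post(f"{BASE}/_cluster_setup", auth=AUTH, json={
    "action": "finish_cluster",
    "ensure_dbs_exist": ["_users", "_replicator"],  # e.g. no _global_changes
})

# enable_single_node: tolerates binding to 127.0.0.1 and sets [cluster] n=1;
# takes the same ensure_dbs_exist parameter.
requests.post(f"{BASE}/_cluster_setup", auth=AUTH, json={
    "action": "enable_single_node",
    "bind_address": "127.0.0.1",
    "username": "admin",
    "password": "password",
    "ensure_dbs_exist": ["_users", "_replicator"],
})

# Status check against the same custom list (query-string encoding here is
# an assumption about how the parameter would be passed on GET).
requests.get(f"{BASE}/_cluster_setup", auth=AUTH,
             params={"ensure_dbs_exist": json.dumps(["_users", "_replicator"])})
```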

Something for the future: people have been asking for the Setup tab in Fauxton to disappear once the cluster is set up. This could be powered by GET /_cluster_setup. Just a thought. 🤔

wohali (Member, Author) commented Jul 11, 2017

First PR is up. Once that's confirmed I'll go over to the Fauxton side of things.

wohali (Member, Author) commented Jul 11, 2017

Also TODO: Docs PR once the other PR lands.

wohali (Member, Author) commented Jul 16, 2017

Time for me to work on Fauxton and get this change exposed there. Once that's done, I'll update the documentation.

wohali (Member, Author) commented Jul 17, 2017

PR up. I had missed that the latest version of the couchdb-setup application now requires a node_count parameter, so I added that to Fauxton at the same time for the clustered setup option.

Once the PR lands, Fauxton is tagged, and I've bumped rebar.config.script, I'll update the documentation for the updated /_cluster_setup endpoint and for initial cluster setup in a final PR. Then we can close this out!

wohali (Member, Author) commented Jul 30, 2017

PR closed. Just have documentation now...

wohali (Member, Author) commented Jul 30, 2017

Final PR is up. This is the last thing we need for a 2.1 RC! 🥂
