Multifundchannel #3763

Merged: 6 commits, Sep 9, 2020

Conversation


@ZmnSCPxj ZmnSCPxj commented Jun 10, 2020

Fixes: #1936

@ZmnSCPxj ZmnSCPxj requested a review from cdecker as a code owner June 10, 2020 05:14
@ZmnSCPxj ZmnSCPxj force-pushed the multifundchannel branch 2 times, most recently from 917e54c to 96d815e Compare June 10, 2020 22:10
@ZmnSCPxj (Collaborator Author)

I am having problems with NodeFactory.get_node when DEVELOPER=0 and disconnect=... are used together, but the test passes with DEVELOPER=1, so I disabled it under DEVELOPER=0.

@ZmnSCPxj (Collaborator Author)

Added missing changelog entries.

@ZmnSCPxj (Collaborator Author)

Snuck in a revamp of the plugins/Makefile.

Why no reviews yet?

@ZmnSCPxj (Collaborator Author)

Corrected passthrough of utxos.

@cdecker (Member) left a comment


Very nice consolidation to have fundchannel as a special case of multifundchannel. I am a bit wary of the use of txprepare with dummy outputs, however: we should under no circumstances handle a transaction which could potentially be passed to hsmd for signature. My fear is that we could end up not replacing one of the dummy outputs and end up sending it out.

The handling of all should definitely be done based on the listfunds output, and doesn't warrant the use of txprepare. The reservation is a better reason, but still dangerous. If we want to keep the reservation I think we need to add a dummy input that makes the dummy tx uncommittable, and we must create the real transaction from scratch instead of amending the dummy tx in order to avoid leaving stale information.

The spark construction is nice, maybe we can improve the ergonomics a bit further by having a wait_group struct wrap the list of sparks, i.e., make struct plugin_data_wait_all_sparks public and give it a name that is more recognizable.

plugins/libplugin.c (5 outdated review threads, resolved)
plugins/multifundchannel.c (2 outdated review threads, resolved)
Comment on lines 735 to 921
txid_tok->end - txid_tok->start,
buf + txid_tok->start);
Member


json_tok_full and json_tok_full_len are helper functions for this kind of computation.

@ZmnSCPxj (Collaborator Author) Jun 16, 2020


Or json_strdup with a tmpctx, which would let us just use %s. But we consistently use this style in e.g. plugins/pay.c, lightningd/pay.c, and elsewhere, when reporting missing fields or incorrect types from JSON.

Member


Well, json_strdup is not equivalent, since it actually does an allocation and copies. We use a mix of the explicit end - start / buf + start style and the accessors because we are slowly migrating from the former to the latter, so let's not introduce new instances of the style we are migrating away from :-)

Collaborator Author


The advantage of json_strdup is that it is DRY and shorter.

 printf("%.*s", json_tok_full_len(tok), json_tok_full(buf, tok));
 printf("%s", json_strdup(tmpctx, buf, tok));

doc/lightning-multifundchannel.7.md (2 outdated review threads, resolved)
@ZmnSCPxj (Collaborator Author)

> The handling of all should definitely be done based on the listfunds output, and doesn't warrant the use of txprepare. The reservation is a better reason, but still dangerous. If we want to keep the reservation I think we need to add a dummy input that makes the dummy tx uncommittable, and we must create the real transaction from scratch instead of amending the dummy tx in order to avoid leaving stale information.

Seems a bit more complicated change; would it be okay to put that in a separate PR?

@ZmnSCPxj (Collaborator Author)

Force-pushed requested changes, except for the transaction preparation changes which would require some more thought.

@ZmnSCPxj (Collaborator Author)

> The handling of all should definitely be done based on the listfunds output, and doesn't warrant the use of txprepare. The reservation is a better reason, but still dangerous.

Atomicity is the main advantage of having a single command that selects inputs, computes all, and reserves the inputs and generates the transaction. If the node has a lot of other RPC commands being sent to it, then by the time you compute all based on listfunds the inputs could have been reserved for another parallel command invocation. So I think we should keep all computations of all in txprepare, because of the atomicity this guarantees you.

See #3415 (comment). We could have dummy transactions have a far future nLockTime, then have a txmodify which discards the dummy transaction and re-prepare the real transaction to be broadcasted (as a single atomic step).

@ZmnSCPxj (Collaborator Author)

I got some strange valgrind error reports arising from PSBT code that I never touched. Rebased to before the recent PSBT changes.

@cdecker (Member) commented Jun 16, 2020

Yes, it seems that libwally's PSBT code has some minor valgrind issues. @niftynei has filed a number of PRs upstream to address them.

@darosior (Collaborator) left a comment


Sorry to bring this up again, but re (in d858a17) :

Changelog-Added: Internal: Our C plugin library now has support for parallelism within an exposed command.

Our library does already support parallel requests, so I wonder why we would prefer this whole new spark system over what we already do, e.g. for starting plugins.
As an example, the following commit's multiconnect plugin could be implemented with the same logic, without the spark module:

diff --git a/plugins/multiconnect.c b/plugins/multiconnect.c
index 5f5002ad8..735bcab47 100644
--- a/plugins/multiconnect.c
+++ b/plugins/multiconnect.c
@@ -6,7 +6,6 @@
 #include <common/json_tok.h>
 #include <common/jsonrpc_errors.h>
 #include <common/utils.h>
-#include <plugins/libplugin_spark.h>
 
 struct connect_single_spark {
        /* The id for this connection attempt.  */
@@ -15,7 +14,8 @@ struct connect_single_spark {
        const char *features;
 
        /* The completion of this spark.  */
-       struct plugin_spark_completion *completion;
+       bool complete;
+       struct connect_multi *cm;
 };
 struct connect_multi {
        /* The individual connect_single commands.  */
@@ -45,6 +45,17 @@ multiconnect_done(struct command *cmd,
        return command_finished(cmd, out);
 }
 
+static bool all_sparks_complete(struct connect_multi *cm)
+{
+       size_t i;
+
+       for (i = 0; i < tal_count(cm->subcommands); i++)
+               if (!cm->subcommands[i].complete)
+                       return false;
+
+       return true;
+}
+
 static struct command_result *
 connect_single_spark_done(struct command *cmd,
                          const char *buf,
@@ -71,23 +82,24 @@ connect_single_spark_done(struct command *cmd,
                plugin_err(cmd->plugin, "'connect' missing 'features' field");
        css->features = json_strdup(cmd, buf, featurestok);
 
-       return plugin_spark_complete(cmd, css->completion);
+       css->complete = true;
+       if (all_sparks_complete(css->cm))
+               return multiconnect_done(cmd, css->cm);
+
+       return command_still_pending(cmd);
 }
-static struct command_result *
+static void
 connect_single_spark_start(struct command *cmd,
-                          struct plugin_spark_completion *completion,
                           struct connect_single_spark *css)
 {
        struct out_req *req;
 
-       css->completion = completion;
-
        req = jsonrpc_request_start(cmd->plugin, cmd, "connect",
                                    &connect_single_spark_done,
                                    &forward_error,
                                    css);
        json_add_string(req->js, "id", css->id);
-       return send_outreq(cmd->plugin, req);
+       send_outreq(cmd->plugin, req);
 }
 
 static struct command_result *multiconnect(struct command *cmd,
@@ -97,25 +109,19 @@ static struct command_result *multiconnect(struct command *cmd,
        size_t i;
        const jsmntok_t *t;
        struct connect_multi *cm;
-       struct plugin_spark_group *sparks;
 
        cm = tal(cmd, struct connect_multi);
        cm->subcommands = tal_arr(cm, struct connect_single_spark, ids->size);
 
-       sparks = plugin_make_spark_group(cm, cmd);
-
        /* We know at this point that ids is an array of strings.  */
        json_for_each_arr(i, t, ids) {
-               struct plugin_spark *spark;
                cm->subcommands[i].id = json_strdup(cm, buf, t);
-               spark = plugin_start_spark(cmd,
-                                          &connect_single_spark_start,
-                                          &cm->subcommands[i]);
-               plugin_spark_group_add(sparks, spark);
+               cm->subcommands[i].complete = false;
+               cm->subcommands[i].cm = cm;
+               connect_single_spark_start(cmd, &cm->subcommands[i]);
        }
 
-       return plugin_wait_spark_group(cmd, sparks,
-                                      &multiconnect_done, cm);
+       return command_still_pending(cmd);
 }
 
 static struct command_result *json_connect(struct command *cmd,

plugins/multiconnect.h (outdated review thread, resolved)
NULL);

json_for_each_arr(i, t, idtok) {
if (t->type != JSMN_STRING)
Collaborator


Maybe also check if the string length is >= 32 so that we exit early in case of a mis-copy/paste

Collaborator Author


The string can have an @host:port appended to the end. We delegate parsing that to connect. We could add the parsing for @host:port here but that would violate DRY unless we refactor the parsing code over in lightningd/connect_control.c and put it in a new common module.

@ZmnSCPxj (Collaborator Author)

> Our library does already support parallel requests, so I wonder why we would prefer this whole new spark system over what we already do, e.g. for starting plugins.

Code reuse. Do I want to keep rewriting the same code to synchronize multiple parallel command executions over and over?

Note also that sparks specifically allow any spark to complete the command, which stops all other suspended sparks (and since libplugin-using plugins are really single-threaded, the only way for one spark to be running and able to complete a command is if every other spark is suspended waiting on a command). I added some extra code in libplugin.c to support libplugin_spark.h specifically so I can use &forward_error in a parallel execution. Without that support code you run the risk of a use-after-free when multiple parallel command executions call forward_error.

@ZmnSCPxj (Collaborator Author)

Bleah, the new plugins were not being installed. Snuck in more improvements to the Makefile installation targets.

@rustyrussell (Contributor) left a comment


> Our library does already support parallel requests, so I wonder why we would prefer this whole new spark system over what we already do, e.g. for starting plugins.

> Code reuse. Do I want to keep rewriting the same code to synchronize multiple parallel command executions over and over?

But 20 lines of code, which may need to be retyped for each user, wins over 500 lines of code. I can read that at a glance.

> Note also that sparks specifically allow any spark to complete the command, which stops all other suspended sparks (and since libplugin-using plugins are really single-threaded, the only way for one spark to be running and able to complete a command is if every other spark is suspended waiting on a command). I added some extra code in libplugin.c to support libplugin_spark.h specifically so I can use &forward_error in a parallel execution. Without that support code you run the risk of a use-after-free when multiple parallel command executions call forward_error.

Except he allocated each part off cmd, so when one finishes the command, the rest are freed. The core code still needs to handle the case where a response comes back and the listener has gone, which it doesn't, but that's a simple fix:

	out = uintmap_get(&plugin->out_reqs, id);
	if (!out)
		plugin_err(plugin, "JSON reply with unknown id '%.*s' (%"PRIu64")",
			   json_tok_full_len(toks),
			   json_tok_full(plugin->rpc_buffer, toks), id);

Running multiple parallel requests is supported just fine by libplugin.

The only real way to tell what's better is for me to reimplement it so we can compare the results...

plugins/Makefile (outdated review thread, resolved)
@rustyrussell (Contributor)

Actually, multiconnect is wrong, either implementation.

If one connect fails, it seems strange to forward the error (with no information on which one it was!) and ignore the successful ones. We probably need to annotate exactly which ones failed.

@rustyrussell (Contributor)

OK, so I've refactored multiconnect, using @darosior 's patch as a starting point.

It's much clearer than the spark version, but I also fixed other issues:

  1. We use the same flow for multi and single connect.
  2. We keep addrhint explicitly, so we can display it on return.
  3. A single error doesn't cause the entire call to fail.
  4. The JSON output is more useful.
  5. We do more of the parameter parsing in the param callback directly. This will help when plugins finally get check support.
  6. We don't use a gratuitous array for connect_command.
  7. The init function can be NULL, so remove it.

The resulting file is only 40 lines longer, most in improved error handling:

diff --git a/plugins/multiconnect.c b/plugins/multiconnect.c
index 5f5002ad8..7fa170578 100644
--- a/plugins/multiconnect.c
+++ b/plugins/multiconnect.c
@@ -1,25 +1,32 @@
 #include "multiconnect.h"
 #include <ccan/array_size/array_size.h>
 #include <ccan/compiler/compiler.h>
+#include <ccan/tal/str/str.h>
 #include <common/json.h>
 #include <common/json_stream.h>
 #include <common/json_tok.h>
 #include <common/jsonrpc_errors.h>
 #include <common/utils.h>
-#include <plugins/libplugin_spark.h>
 
-struct connect_single_spark {
-	/* The id for this connection attempt.  */
-	const char *id;
+struct connect_single {
+	/* The id for this connection attempt (may include @... initially). */
+	char *id;
+	/* Optional address hint. */
+	char *addrhint;
+
+	/* Exactly one of these is set on completion: */
 	/* The features for this connection attempt.  */
 	const char *features;
+	/* If it failed, this is the raw JSON error. */
+	const char *error_json;
 
-	/* The completion of this spark.  */
-	struct plugin_spark_completion *completion;
+	/* The container for this */
+	struct connect_multi *cm;
 };
+
 struct connect_multi {
 	/* The individual connect_single commands.  */
-	struct connect_single_spark *subcommands;
+	struct connect_single **subcommands;
 };
 
 static struct command_result *
@@ -31,25 +38,50 @@ multiconnect_done(struct command *cmd,
 
 	out = jsonrpc_stream_success(cmd);
 
-	json_array_start(out, "id");
-	for (i = 0; i < tal_count(cm->subcommands); ++i)
-		json_add_string(out, NULL, cm->subcommands[i].id);
-	json_array_end(out);
-
-	json_array_start(out, "features");
+	json_array_start(out, "peers");
 	for (i = 0; i < tal_count(cm->subcommands); ++i) {
-		json_add_string(out, NULL, cm->subcommands[i].features);
+		json_object_start(out, NULL);
+		json_add_string(out, "id", cm->subcommands[i]->id);
+		if (cm->subcommands[i]->addrhint)
+			json_add_string(out, "addresshint",
+					cm->subcommands[i]->addrhint);
+		if (cm->subcommands[i]->error_json) {
+			json_add_bool(out, "connected", false);
+			json_add_jsonstr(out, "error", cm->subcommands[i]->error_json);
+		} else {
+			json_add_bool(out, "connected", true);
+			json_add_string(out, "features", cm->subcommands[i]->features);
+		}
+		json_object_end(out);
 	}
 	json_array_end(out);
 
 	return command_finished(cmd, out);
 }
 
+static bool all_complete(struct connect_multi *cm)
+{
+       for (size_t i = 0; i < tal_count(cm->subcommands); i++)
+               if (!cm->subcommands[i]->error_json && !cm->subcommands[i]->features)
+                       return false;
+
+       return true;
+}
+
+static struct command_result *single_completed(struct command *cmd,
+					       struct connect_single *connect)
+{
+	if (all_complete(connect->cm))
+		return multiconnect_done(cmd, connect->cm);
+
+	return command_still_pending(cmd);
+}
+
 static struct command_result *
-connect_single_spark_done(struct command *cmd,
-			  const char *buf,
-			  const jsmntok_t *result,
-			  struct connect_single_spark *css)
+connect_single_done(struct command *cmd,
+		    const char *buf,
+		    const jsmntok_t *result,
+		    struct connect_single *connect)
 {
 	const jsmntok_t *idtok;
 	const jsmntok_t *featurestok;
@@ -62,157 +94,161 @@ connect_single_spark_done(struct command *cmd,
 		plugin_err(cmd->plugin, "'connect' missing 'id' field");
 
 	/* Replace whatever string was originally in here.  */
-	tal_free(css->id);
-	css->id = json_strdup(cmd, buf, idtok);
+	tal_free(connect->id);
+	connect->id = json_strdup(connect, buf, idtok);
 
 	/* Retrieve the `features` from the base command.  */
 	featurestok = json_get_member(buf, result, "features");
 	if (!featurestok)
 		plugin_err(cmd->plugin, "'connect' missing 'features' field");
-	css->features = json_strdup(cmd, buf, featurestok);
+	connect->features = json_strdup(connect, buf, featurestok);
 
-	return plugin_spark_complete(cmd, css->completion);
+	return single_completed(cmd, connect);
 }
-static struct command_result *
-connect_single_spark_start(struct command *cmd,
-			   struct plugin_spark_completion *completion,
-			   struct connect_single_spark *css)
+
+static struct command_result *connect_single_error(struct command *cmd,
+						   const char *buf,
+						   const jsmntok_t *error,
+						   struct connect_single *connect)
 {
-	struct out_req *req;
+	connect->error_json = json_strdup(connect, buf, error);
+	return single_completed(cmd, connect);
+}
 
-	css->completion = completion;
+static void
+connect_single_start(struct command *cmd,
+		     struct connect_single *connect)
+{
+	struct out_req *req;
 
 	req = jsonrpc_request_start(cmd->plugin, cmd, "connect",
-				    &connect_single_spark_done,
-				    &forward_error,
-				    css);
-	json_add_string(req->js, "id", css->id);
-	return send_outreq(cmd->plugin, req);
+				    &connect_single_done,
+				    &connect_single_error,
+				    connect);
+
+	json_add_string(req->js, "id", tal_fmt(tmpctx, "%s%s",
+					       connect->id,
+					       connect->addrhint
+					       ? connect->addrhint : ""));
+	send_outreq(cmd->plugin, req);
 }
 
-static struct command_result *multiconnect(struct command *cmd,
-					   const char *buf,
-					   const jsmntok_t *ids)
+static struct command_result *add_single(struct command *cmd,
+					 struct connect_multi *cm,
+					 const char *buffer, const jsmntok_t *idtok)
 {
-	size_t i;
-	const jsmntok_t *t;
-	struct connect_multi *cm;
-	struct plugin_spark_group *sparks;
+	struct connect_single *connect = tal(cm, struct connect_single);
+	char *atptr;
 
-	cm = tal(cmd, struct connect_multi);
-	cm->subcommands = tal_arr(cm, struct connect_single_spark, ids->size);
+	if (idtok->type != JSMN_STRING)
+		return command_done_err(cmd, JSONRPC2_INVALID_PARAMS,
+					tal_fmt(tmpctx,
+						"id: expected string, not '%.*s'",
+						idtok->end - idtok->start,
+						buffer + idtok->start),
+					NULL);
 
-	sparks = plugin_make_spark_group(cm, cmd);
+	connect->features = NULL;
+	connect->error_json = NULL;
+	connect->cm = cm;
+	connect->id = json_strdup(connect, buffer, idtok);
+	atptr = strchr(connect->id, '@');
+	if (atptr) {
+		connect->addrhint = tal_strdup(connect, atptr);
+		*atptr = '\0';
+	} else
+		connect->addrhint = NULL;
+	tal_arr_expand(&cm->subcommands, connect);
+	return NULL;
+}
 
-	/* We know at this point that ids is an array of strings.  */
-	json_for_each_arr(i, t, ids) {
-		struct plugin_spark *spark;
-		cm->subcommands[i].id = json_strdup(cm, buf, t);
-		spark = plugin_start_spark(cmd,
-					   &connect_single_spark_start,
-					   &cm->subcommands[i]);
-		plugin_spark_group_add(sparks, spark);
-	}
+static struct command_result *param_ids(struct command *cmd, const char *name,
+					const char *buffer, const jsmntok_t *tok,
+					struct connect_multi *cm)
+{
+	struct command_result *err;
+
+	/* Is id a string?  Treat it as a single-entry array */
+	if (tok->type == JSMN_ARRAY) {
+		size_t i;
+		const jsmntok_t *t;
+
+		/* Empty array is probably a bug. */
+		if (tok->size == 0)
+			return command_done_err(cmd, JSONRPC2_INVALID_PARAMS,
+						"Need at least one id", NULL);
 
-	return plugin_wait_spark_group(cmd, sparks,
-				       &multiconnect_done, cm);
+		json_for_each_arr(i, t, tok) {
+			err = add_single(cmd, cm, buffer, t);
+			if (err)
+				break;
+		}
+	} else
+		err = add_single(cmd, cm, buffer, tok);
+
+	return err;
 }
 
 static struct command_result *json_connect(struct command *cmd,
 					   const char *buf,
 					   const jsmntok_t *params)
 {
-	const jsmntok_t *idtok;
+	struct connect_multi *cm = tal(cmd, struct connect_multi);
 	const char *host;
 	u32 *port;
-	struct out_req *req;
 
+	cm->subcommands = tal_arr(cm, struct connect_single *, 0);
 	if (!param(cmd, buf, params,
-		   p_req("id", param_tok, &idtok),
+		   p_req("id", param_ids, cm),
 		   p_opt("host", param_string, &host),
 		   p_opt("port", param_number, &port),
 		   NULL))
 		return command_param_failed();
 
-	/* Is id an array?  If so use multi-connect.  */
-	if (idtok->type == JSMN_ARRAY) {
-		size_t i;
-		const jsmntok_t *t;
-
-		if (host)
+	/* For simplicity, we support host and port iff one entry. */
+	if (host || port) {
+		struct connect_single *s = cm->subcommands[0];
+		plugin_log(cmd->plugin, LOG_UNUSUAL, "host = %s, port = %p (%u)",
+			   host, port, port ? *port : 0);
+		if (tal_count(cm->subcommands) != 1)
 			return command_done_err(cmd, JSONRPC2_INVALID_PARAMS,
 						"Cannot specify parameter "
-						"'host' when 'id' parameter "
+						"'host' or 'port' when 'id' parameter "
 						"is an array.",
 						NULL);
-		if (port)
+		if (s->addrhint)
 			return command_done_err(cmd, JSONRPC2_INVALID_PARAMS,
 						"Cannot specify parameter "
-						"'port' when 'id' parameter "
-						"is an array.",
+						"'host' or 'port' when 'id' contains "
+						" @ hint already",
 						NULL);
+		s->addrhint = tal_strdup(s, "@");
+		if (host)
+			tal_append_fmt(&s->addrhint, "%s", host);
+		if (port)
+			tal_append_fmt(&s->addrhint, ":%u", *port);
+	}
 
-		if (idtok->size == 0)
-			return command_done_err(cmd, JSONRPC2_INVALID_PARAMS,
-						"Empty 'id' array: "
-						"nothing to connect.",
-						NULL);
-
-		json_for_each_arr(i, t, idtok) {
-			if (t->type != JSMN_STRING)
-				return command_done_err(cmd,
-							JSONRPC2_INVALID_PARAMS,
-							"All items in 'id' "
-							"array "
-							"must be strings.",
-							NULL);
-		}
-
-		return multiconnect(cmd, buf, idtok);
-	} else if (idtok->type != JSMN_STRING)
-		return command_done_err(cmd, JSONRPC2_INVALID_PARAMS,
-					"'id' must be either a string or an "
-					"array of strings.",
-					NULL);
+	for (size_t i = 0; i < tal_count(cm->subcommands); i++)
+		connect_single_start(cmd, cm->subcommands[i]);
 
-	/* If ID is string, just forward to connect_single command.  */
-	req = jsonrpc_request_start(cmd->plugin, cmd, "connect",
-				    &forward_result, &forward_error,
-				    NULL);
-	json_add_string(req->js, "id", json_strdup(tmpctx, buf, idtok));
-	if (host)
-		json_add_string(req->js, "host", host);
-	if (port)
-		json_add_u32(req->js, "port", *port);
-	return send_outreq(cmd->plugin, req);
+	return command_still_pending(cmd);
 }
 
-const struct plugin_command connect_commands[] = {
-	{
-		"multiconnect",
-		"network",
-		"Connect to {id} at {host} (which can end in ':port' if not default). "
-		"{id} can also be of the form id@host[:port].",
-		"Alternately, {id} can be an array of strings of the form id[@host[:port]] "
-		"to connect to multiple peers simultaneously "
-		"(and you should not specify {host} or {port}).",
-		json_connect
-	}
+static const struct plugin_command connect_command = {
+	"multiconnect",
+	"network",
+	"Connect to id, or [id, ...] to connect to multiple.",
+	"{id} can also be of the form id@host[:port].",
+	json_connect
 };
-const unsigned int num_connect_commands = ARRAY_SIZE(connect_commands);
-
-void connect_init(struct plugin *plugin UNUSED,
-		  const char *buf UNUSED, const jsmntok_t *config UNUSED)
-{
-	/* Do nothing.  */
-}
 
 int main(int argc, char **argv)
 {
 	setup_locale();
 	plugin_main(argv,
-		    &connect_init, PLUGIN_RESTARTABLE,
+		    NULL, PLUGIN_RESTARTABLE,
 		    NULL,
-		    connect_commands, num_connect_commands,
+		    &connect_command, 1,
 		    NULL, 0, NULL, 0, NULL);
 }

@rustyrussell (Contributor)

@ZmnSCPxj you need to be far more reluctant to create new infrastructure!

multifundchannel is a great idea, but it really wants a "best effort" option which will create as many of the channels as it can. And it doesn't need multiconnect: it can more easily do that itself. It doesn't need a new parallel dispatch infrastructure.

It needs to be the simplest possible code, so we can all review it and make sure it is as robust as possible. Not 600 new lines of new concept infrastructure followed by 1700 lines of a new plugin.

I will continue my review tomorrow...

@ZmnSCPxj (Collaborator Author)

Removed sparks.

@ZmnSCPxj ZmnSCPxj force-pushed the multifundchannel branch 2 times, most recently from c7d1277 to c5360af Compare August 19, 2020 02:21
@ZmnSCPxj ZmnSCPxj added this to the v0.9.1 milestone Aug 19, 2020
@ZmnSCPxj ZmnSCPxj force-pushed the multifundchannel branch 2 times, most recently from 810b615 to 2f3d016 Compare August 21, 2020 08:51
@ZmnSCPxj (Collaborator Author)

Rebased, also added missing checks for dust limit.

@ZmnSCPxj ZmnSCPxj force-pushed the multifundchannel branch 4 times, most recently from 79c12aa to 600dee4 Compare August 22, 2020 09:22
@ZmnSCPxj (Collaborator Author)

bumpity bumpity bump bump bumpity bumpity bump bump bumpity bumpity bump bump

@niftynei (Collaborator) left a comment


Bunch of little things, mostly. The code organization in general is pretty non-standard; it'd be nicer to have things ordered by usage, and maybe even to use multiple code files to make it more readable, even if that means reading from bottom to top (which is how every other plugin file we have in here reads :P)

Also, for log statements it'd be much clearer if we used node_ids to identify destinations rather than the 'internal' index -- it makes errors easier to identify if every log statement has the node_id in it.

@@ -17,6 +17,16 @@ struct bitcoin_signature;
struct bitcoin_txid;
struct pubkey;

/** psbt_destroy - Destroy a PSBT that is not tal-allocated
Collaborator


This change doesn't belong in this commit.

Since psbt_destroy is only used in psbt.c, a better fix would be to make it static.

}
};
static
const unsigned int num_multifundchannel_commands =
Collaborator


As this is a const, should it not be all caps?

versions of C-Lightning) will disconnect in case
of funding channel failure.
And with a *multi* funding, it is more likely to
fail due to having to coordinate many more nodes.
Collaborator


i'd make it explicit as to why "disconnect on fail" is related to connecting to the nodes upfront. i'm assuming reason you'd mention this is because "if a previous attempt failed we can retry without needing to reconnect". it'd be useful to include this rationale here.

unsigned int i;

plugin_log(mfc->cmd->plugin, LOG_DBG,
"mfc %"PRIu64": multiconnect.", mfc->id);
Collaborator


for logs, it's nice to spell out what you're referring to

Suggested change
"mfc %"PRIu64": multiconnect.", mfc->id);
"multifund cmd#%"PRIu64" status: multiconnect.", mfc->id);


mfc->pending = tal_count(mfc->destinations);

for (i = 0; i < tal_count(mfc->destinations); ++i)
@niftynei (Collaborator) Aug 27, 2020


Suggested change
for (i = 0; i < tal_count(mfc->destinations); ++i)
for (i = 0; i < tal_count(mfc->destinations); i++)

what is this ++C? 😉

Collaborator Author


The preincrement operator. I prefer it out of habit from very old, very lousy compilers, where ++i compiled more efficiently than i++ at -O0: the postincrement made the compiler emit a useless move to a temporary register that was only reused later.

- 300: The maximum allowed funding amount is exceeded.
- 301: There are not enough funds in the internal wallet (including fees) to create the transaction.
- 302: The output amount is too small, and would be considered dust.
- 303: Broadcasting of the funding transaction failed, the internal call to bitcoin-cli returned with an error.
Collaborator


Suggested change
- 303: Broadcasting of the funding transaction failed, the internal call to bitcoin-cli returned with an error.
- 303: Broadcasting the funding transaction failed, the internal call to the bitcoin backend returned with an error.

We may not be using bitcoin-cli?

{"id": '{}@localhost:{}'.format(l3.info['id'], l3.port),
"amount": 50000}]
with pytest.raises(RpcError):
l1.rpc.multifundchannel(destinations)
Collaborator

missing tests for: dust + insufficient funds


# Only l3 should fail to have channels.
for node in [l1, l2, l4]:
node.daemon.wait_for_log(r'to CHANNELD_NORMAL')
Collaborator

assert that l3 indeed failed, and in the correct state/location?


/* Reset its state. */
dest->state = MULTIFUNDCHANNEL_START_NOT_YET;
/* We do not mess with `dest->index` so we have
Collaborator

using the node_id instead of index will fix this bit.

/* Filter out destinations. */
old_destinations = tal_steal(tmpctx, mfc->destinations);
mfc->destinations = NULL;
new_destinations = tal_arr(mfc, struct multifundchannel_destination,
Collaborator

why not just use mfc->destinations?

@niftynei
Collaborator

Found a crash case while trying it out on command line, pushed a patch to fix it up.

Contributor

@rustyrussell left a comment

I really like this. There are some cosmetic things that @niftynei and I both commented on, and I changed my mind on best_effort as a flag (sorry!).

I've marked it for this release, hope we can get it in!

&mfc_cleanup_done,
cleanup);
json_add_string(req->js, "psbt",
psbt_to_b64(tmpctx, psbt));
Contributor

Can now be json_add_psbt FWIW. Trivial tho

ok = ok && field;
ok = ok && parse_amount_sat(&mfc->excess_sat,
buf + field->start,
field->end - field->start);
Contributor

This reads like an advertisement for goto. i.e.:

field = json_get_member(buf, result, "psbt");
if (!field)
    goto fail;

mfc->psbt = psbt_from_b64(mfc,
			  buf + field->start,
			  field->end - field->start);
if (!mfc->psbt)
    goto fail;

...
fail:
    plugin_err(mfc->cmd->plugin,
			   "Unexpected result from fundpsbt/utxopsbt: %.*s",
			   json_tok_full_len(result),
			   json_tok_full(buf, result));

Collaborator Author

This is still shorter:

field = ok ? json_get_member(buf, result, "psbt") : NULL;
ok = ok && field;
mfc->psbt = ok ? psbt_from_b64(mfc, buf + field->start, field->end - field->start) : NULL;
ok = ok && mfc->psbt;
...

if (!ok)
    plugin_err(...);

shrug.

size_t all_index = SIZE_MAX;
struct multifundchannel_destination *all_dest;

assert(mfc->has_all);
Contributor

Yes, but OTOH it's across a function boundary. I'm happy to see this as a reviewer.

mfc->excess_sat, change_fee);
assert(ok);
mfc->change_needed = true;
if (!mfc->change_scriptpubkey)
Contributor

Huh, can this ever be set?

Collaborator Author

Yes. A previous best_effort attempt may have loaded this, and it seems wasteful to keep asking newaddr for a fresh address on each best-effort attempt; we just reuse the previous one. So it is indeed possible for it to be set, if a previous best-effort attempt got that far.

struct command_result *result;

if (!param(cmd, buf, params,
p_req("destinations", param_tok, &json_destinations),
Contributor

Always prefer to do inline parsing where possible (rather than using param_tok and parsing later). That way the "check" command is far more effective (which always exits this function after param).

Of course, IIRC we haven't implemented check for plugins, but the theory applies!

@@ -862,7 +862,8 @@ def multifundchannel(self, destinations, feerate=None, minconf=None, utxos=None,
"destinations": destinations,
"feerate": feerate,
"minconf": minconf,
"utxos": utxos
"utxos": utxos,
"best_effort": best_effort
Contributor

I changed my mind. This parameter should be "minchannels", to indicate how many channels are acceptable. Default is all of them; otherwise it proceeds unless there are fewer than "minchannels" channels remaining. 0 is invalid.

@rustyrussell
Contributor

rebased onto master, put some fixmes on top. Needs merging in properly, but this is better for review.

@rustyrussell force-pushed the multifundchannel branch 3 times, most recently from 8ff9c70 to 5a2b219 on September 9, 2020 00:27
@rustyrussell
Contributor

OK, I merged in the fixmes into one smooth progression. Will apply once Travis is happy.

ZmnSCPxj and others added 6 commits September 9, 2020 12:40
Changelog-Added: We now have `multifundchannel` as a builtin plugin command to fund multiple channels to different peers all in a single onchain transaction.


Header from folded patch 'fixup-use-json_add_psbt.patch':

fixup!


Header from folded patch 'use-goto-no-ok-chain.patch':

fixup!


Header from folded patch 'destinations-at-parse-time.patch':

fixup!


Header from folded patch 'multifundchannel__use_jsmntoks_to_pass_through_json_string,_not_strings.patch':

multifundchannel: use jsmntoks to pass through json string, not strings

Passing in "" for utxos would crash lightningd on the command-line
otherwise. Now returns an error.


Header from folded patch 'update_plugins-multifundchannel.c.patch':

Update plugins/multifundchannel.c

Co-authored-by: Darosior <darosior@protonmail.com>
[ changed from best_effort binary to minchannels counter -- RR]
@rustyrussell
Contributor

Ack 0f24eb8

@rustyrussell merged commit e6a1819 into ElementsProject:master on Sep 9, 2020
Successfully merging this pull request may close these issues:

Batch Funding Transaction for opening several channels (feature request)