
Handle untouched new nested data types as no-ops #187

Merged: 4 commits into master from jrd-issue-186, Jul 8, 2014

Conversation

@macintux (Contributor) commented Jul 7, 2014

Addresses #186.

If new nested types are added to a map and left at their default values, riakc_map:to_op/1 would throw an exception. Now it returns undefined if no other nested objects have been modified (or exist), or a list of operations that ignores the newly added nested key.
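
A minimal sketch of the fixed behavior (Erlang, using only the public riakc_map API; the key name is invented for illustration):

%% Creating a nested type and leaving it at its default value used to
%% make to_op/1 throw; with this change it is treated as a no-op.
M0 = riakc_map:new(),
M1 = riakc_map:update({<<"tags">>, set}, fun(S) -> S end, M0),
undefined = riakc_map:to_op(M1).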

@seancribbs (Contributor)

👍 5064ef8

@seancribbs (Contributor)

I know I gave a +1 but a test would be nice too.

borshop added a commit that referenced this pull request Jul 7, 2014
Handle untouched new nested data types as no-ops

Reviewed-by: seancribbs
@macintux (Contributor, Author) commented Jul 7, 2014

Added a new test via @seancribbs. Verified the test fails with the old code and passes with the new.

We're not currently testing mixed situations (some valid updates plus some no-ops); looking at that now.
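
For the record, the mixed case would look something like this (a hypothetical sketch against the public riakc_map/riakc_counter API; key names invented):

%% One real mutation plus one untouched nested type: to_op/1 should
%% emit only the counter update and skip <<"tags">> entirely.
M0 = riakc_map:new(),
M1 = riakc_map:update({<<"hits">>, counter},
                      fun(C) -> riakc_counter:increment(1, C) end, M0),
M2 = riakc_map:update({<<"tags">>, set}, fun(S) -> S end, M1),
Op = riakc_map:to_op(M2).
%% Op should carry only the counter increment.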

@seancribbs (Contributor)

👍 ed697ce

@macintux You can't retry if it never did the merge-a-roo

borshop added a commit that referenced this pull request Jul 8, 2014
Handle untouched new nested data types as no-ops

Reviewed-by: seancribbs
@macintux (Contributor, Author) commented Jul 8, 2014

Don't know why Riak is failing to start for the test. Software, how does it work?

borshop added a commit that referenced this pull request Jul 8, 2014
Handle untouched new nested data types as no-ops

Reviewed-by: seancribbs
@seancribbs (Contributor)

@macintux ¯\_(ツ)_/¯

@seancribbs (Contributor)

@macintux This is happening on the builder if I try to run the configured node:

$ riak config effective
escript: exception error: no case clause matching
                 {ok,[[{riak_core,[{cluster_mgr,{"127.0.0.1",10016}}]},
                       {riak_repl,[{data_root,"./data/riak_repl/"},
                                   {max_fssource_cluster,5},
                                   {max_fssource_node,1},
                                   {max_fssink_node,1},
                                   {fullsync_on_connect,true},
                                   {fullsync_interval,30},
                                   {rtq_max_bytes,104857600},
                                   {proxy_get,disabled},
                                   {rt_heartbeat_interval,15},
                                   {rt_heartbeat_timeout,15},
                                   {fullsync_use_background_manager,true}]}],
                      [{riak_kv,[{test,true}]}]]}
  in function  cuttlefish_escript:effective/1 (src/cuttlefish_escript.erl, line 113)
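
The shape of that failure (a sketch, not cuttlefish's actual source): file:consult/1 returns one list element per top-level term in the file, and the escript evidently expects advanced.config to hold exactly one term:

%% A well-formed advanced.config consults to {ok, [Config]}. The file
%% on the builder holds two top-level terms, so consult returns
%% {ok, [Config1, Config2]}, which matches no clause and raises the
%% case_clause error shown above.
case file:consult("advanced.config") of
    {ok, [Config]} when is_list(Config) ->
        Config
end.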

@seancribbs (Contributor)

Aha: note the contents of advanced.config; this is now running against riak_ee:

[
 {riak_core,
  [
   %% The cluster manager will listen for connections from remote
   %% clusters on this ip and port. Every node runs one cluster
   %% manager, but only the cluster manager running on the
   %% cluster_leader will service requests. This can change as nodes
   %% enter and leave the cluster.
   {cluster_mgr, {"127.0.0.1", 10016 } }
  ]},

 {riak_repl,
  [
   %% Path (relative or absolute) to the working directory for the
   %% replication process
   {data_root, "./data/riak_repl/"},

   %% The hard limit of fullsync workers that will be running on the
   %% source side of a cluster across all nodes on that cluster for a
   %% fullsync to a sink cluster. This means if one has configured
   %% fullsync for two different clusters, both with a
   %% max_fssource_cluster of 5, 10 fullsync workers can be in
   %% progress. Only affects nodes on the source cluster on which this
   %% parameter is defined via the configuration file or command line.
   {max_fssource_cluster, 5},

   %% Limits the number of fullsync workers that will be running on
   %% each individual node in a source cluster. This is a hard limit for
   %% all fullsyncs enabled; additional fullsync configurations will not
   %% increase the number of fullsync workers allowed to run on any node.
   %% Only affects nodes on the source cluster on which this parameter is
   %% defined via the configuration file or command line.
   {max_fssource_node, 1},

   %% Limits the number of fullsync workers allowed to run on each
   %% individual node in a sink cluster. This is a hard limit for all
   %% fullsync sources interacting with the sink cluster. Thus, multiple
   %% simultaneous source connections to the sink cluster will have to
   %% share the sink node's number of maximum connections. Only affects
   %% nodes on the sink cluster on which this parameter is defined via
   %% the configuration file or command line.
   {max_fssink_node, 1},

   %% Whether to initiate a fullsync on initial connection from the
   %% sink cluster.
   {fullsync_on_connect, true},

   %% A single integer value representing the duration to wait in
   %% minutes between fullsyncs, or a list of {clustername,
   %% time_in_minutes} pairs for each sink participating in fullsync
   %% replication.
   {fullsync_interval, 30},

   %% The maximum size the realtime replication queue can grow to
   %% before new objects are dropped. Defaults to 100MB. Dropped
   %% objects will need to be replicated with a fullsync.
   {rtq_max_bytes, 104857600},

   %% Enable Riak CS proxy_get and block filter.
   {proxy_get, disabled},

   %% A heartbeat message is sent from the source to the sink every
   %% heartbeat_interval. Setting heartbeat_interval to undefined
   %% disables the realtime heartbeat. This feature is only available in
   %% Riak Enterprise 1.3.2+.
   {rt_heartbeat_interval, 15},

   %% If a heartbeat response is not received in rt_heartbeat_timeout
   %% seconds, then the source connection exits and will be
   %% re-established. This feature is only available in Riak
   %% Enterprise 1.3.2+.
   {rt_heartbeat_timeout, 15},

   %% By default, fullsync replication will try to coordinate with
   %% other riak subsystems that may be contending for the same
   %% resources. This will help to prevent system response degradation
   %% under times of heavy load from multiple background tasks. To
   %% disable background coordination, set this parameter to false.
   %% Enterprise 2.0+.
   {fullsync_use_background_manager, true}
  ]}
].
[{riak_kv,[{test,true}]}].

@macintux (Contributor, Author) commented Jul 8, 2014

[{riak_kv,[{test,true}]}].

?!

@macintux (Contributor, Author) commented Jul 8, 2014

Btw, failing to have one global list in advanced.config sounds awfully familiar...

@seancribbs (Contributor)

@macintux Yes, stung by basho/cuttlefish#158

@seancribbs (Contributor)

Suggestion: try changing that "append" (>>) to "overwrite" (>) in the buildbot/Makefile.

@macintux (Contributor, Author) commented Jul 8, 2014

Ah, hadn't noticed that was our fault. Heh.

macintux added a commit: ….config to add this entry instead of overwriting the EE configuration
@macintux (Contributor, Author) commented Jul 8, 2014

Still hoping to get a mixed-mode test written, and perhaps to do a better job with the most recent change, but this could certainly be merged at any time if I don't get that done before I take off.

@seancribbs (Contributor)

👍 49db14d

@macintux Let's open an issue for the overwrite vs. merge thing. A short escript could do the trick.
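
Something along these lines, perhaps (a hypothetical sketch, not an agreed design; it naively appends all top-level lists into one and rewrites the file in place):

#!/usr/bin/env escript
%% Merge every top-level term in advanced.config into a single list so
%% appended entries and the EE defaults coexist. Note lists:append/1
%% does not deduplicate per-application proplists.
main([File]) ->
    {ok, Terms} = file:consult(File),
    Merged = lists:append(Terms),
    ok = file:write_file(File, io_lib:format("~p.~n", [Merged])).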

borshop added a commit that referenced this pull request Jul 8, 2014
Handle untouched new nested data types as no-ops

Reviewed-by: seancribbs
@macintux (Contributor, Author) commented Jul 8, 2014

On further review, etoomuchtodo. @borshop merge

@reiddraper (Contributor)

@borshop merge needs to be first

@borshop merged commit 49db14d into master Jul 8, 2014
@seancribbs deleted the jrd-issue-186 branch July 9, 2014 01:31