Instead of failing to get stats and then forgetting the socket completely, the tcpmon now unwraps the socket if needed to get at the underlying gen_tcp socket for stats. This seemed a better place for the unwrap than requiring callers of the 'monitor/2' function to unwrap the socket themselves, as it keeps that interface simpler and keeps the true nature of the socket available to tcpmon if needed.
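A minimal sketch of the approach, assuming a simple tuple wrapper; the wrapper shape and helper names here are illustrative, not the actual tcpmon code:

```erlang
%% Sketch only: the wrapper shape and function names are assumptions,
%% not the actual riak_core tcpmon code.
%% Unwrap until we reach the raw inet port that inet:getstat/1 accepts.
unwrap_socket(Sock) when is_port(Sock) ->
    Sock;
unwrap_socket({_WrapperTag, Inner}) ->
    unwrap_socket(Inner).

%% Callers still hand the wrapped socket to monitor/2; the stats path
%% unwraps internally, so the interface stays simple.
get_stats(WrappedSock) ->
    inet:getstat(unwrap_socket(WrappedSock)).
```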
This commit provides a stop-gap measure to prevent a degenerate case when resizing a ring with larger datasets. When a ring is resized, it is possible that some partitions do not change ownership. If these partitions have data transferred to them before being allowed to perform their own transfers, they repeatedly iterate over and ignore a bunch of data from the new ring. In some cases this can cause handoff to time out, because the sender's fold spends too much time skipping items not bound for the current receiver, which can in turn stall the resize operation itself. A temporary workaround has been to raise the handoff timeout values to let these partitions complete their transfers. With this commit, these transfers are instead scheduled first, taking advantage of the next list being treated as ordered by the vnode manager. Although this is by no means a solution in 100% of cases, it should be a general improvement and should prevent the need for timeout adjustments, assuming that when folding the data is well distributed across the various preflists the vnode is responsible for a portion of.
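A hedged sketch of the scheduling tweak, assuming next-list entries carry a tag identifying resize transfers; the entry shape and predicate below are assumptions about the internals, not the actual code:

```erlang
%% Illustrative only: the shape of next-list entries and the resize tag
%% are assumptions. lists:partition/2 preserves relative order within
%% each group, so the rest of the ordered next list is unchanged.
order_next(Next) ->
    {ResizeXfers, Others} = lists:partition(fun is_resize_transfer/1, Next),
    ResizeXfers ++ Others.

%% Hypothetical predicate: true for entries tagged as resize transfers.
is_resize_transfer({_Idx, _Owner, _NextOwner, _Mods, '$resize'}) -> true;
is_resize_transfer(_) -> false.
```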
Add the riak_core_bucket_type:property_hash/1 function to provide a convenient way to calculate a hash of certain bucket type properties whose values could affect the treatment of buckets created with a particular bucket type.
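A rough sketch of what such a hash could look like, assuming the function receives the type's property list; the key set and the choice of erlang:phash2/1 are assumptions, not the shipped implementation:

```erlang
%% Sketch only: which keys count as treatment-affecting is a guess here.
-define(HASHED_PROPS, [consistent, datatype]).

property_hash(Props) when is_list(Props) ->
    %% Pick out the relevant properties and sort for a stable order,
    %% so the hash doesn't depend on property-list ordering.
    Relevant = [lists:keyfind(Key, 1, Props) || Key <- ?HASHED_PROPS],
    erlang:phash2(lists:sort(Relevant)).
```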
Fix 2 failing tests

* `refresh_my_ring_test` was failing due to an ETS table not being set up properly.
* `core_vnode_eqc` was failing in multiple places:
  * shutdown had to be changed to kill so that processes didn't hang during testing
  * an asyncwork response wasn't being counted for asynccrash calls, so that needed to be added back in (it was commented out)
  * With worker pools of 1, if an asynccrash call was followed by another async call, that call's work wouldn't be counted because the message was dropped. I changed the possible pool sizes from [0,1,10] to [0,4,10] to make this issue extremely unlikely to happen (see the sketch below). You'd need 4 worker crashes and then another async call before any of them were restarted. Since they all have 100ms delays before crashing on purpose in mock_vnode, that seems highly unlikely.
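As referenced in the last bullet, a small sketch of the pool-size generator change; the generator name and eqc usage here are illustrative of the idea, not the actual test code:

```erlang
%% Before: pools of size 1 made the dropped-message race easy to hit.
%% pool_size() -> eqc_gen:elements([0, 1, 10]).

%% After: four workers would all have to crash (each after its
%% deliberate 100ms delay in mock_vnode) before another async call
%% could lose its message.
pool_size() ->
    eqc_gen:elements([0, 4, 10]).
```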