Stats conversion to JSON causes 500 error #555

Closed
lukebakken opened this issue Mar 6, 2014 · 3 comments

lukebakken commented Mar 6, 2014

Ticket 7372. Details follow. This is similar to issue #404.

Setup: 3 clusters, all with {inverse_connection, true}. Cluster 1 is connected to clusters 2 and 3 with v2 replication.

curl test-riak-1:8098/riak-repl/stats

<html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error, 
{error, 
{case_clause, 
[{<0.1493.299>, 
{message_queue_len,0}, 
{status, 
[{node,'REDACTED'}, 
{site,"test3"}, 
{strategy,riak_repl_keylist_server}, 
{fullsync_worker,<0.1495.299>}, 
{queue_pid,<0.1503.299>}, 
{dropped_count,0}, 
{queue_length,0}, 
{queue_byte_size,0}, 
{queue_max_size,104857600}, 
{queue_percentage,0}, 
{queue_pending,0}, 
{queue_max_pending,5}, 
{state,wait_for_partition}]}}, 
{<0.12107.297>, 
{message_queue_len,0}, 
{status, 
[{node,'riak@test-riak-1'}, 
{site,"site2"}, 
{strategy,riak_repl_keylist_server}, 
{fullsync_worker,<0.12109.297>}, 
{queue_pid,<0.12117.297>}, 
{dropped_count,0}, 
{queue_length,0}, 
{queue_byte_size,0}, 
{queue_max_size,104857600}, 
{queue_percentage,0}, 
{queue_pending,0}, 
{queue_max_pending,5}, 
{state,wait_for_partition}]}}]}, 
[{riak_repl_wm_stats,jsonify_stats,2, 
[{file,"src/riak_repl_wm_stats.erl"},{line,161}]}, 
{riak_repl_wm_stats,get_stats,0, 
[{file,"src/riak_repl_wm_stats.erl"},{line,108}]}, 
{riak_repl_wm_stats,produce_body,2, 
[{file,"src/riak_repl_wm_stats.erl"},{line,77}]}, 
{webmachine_resource,resource_call,3, 
[{file,"src/webmachine_resource.erl"},{line,186}]}, 
{webmachine_resource,do,3, 
[{file,"src/webmachine_resource.erl"},{line,142}]}, 
{webmachine_decision_core,resource_call,1, 
[{file,"src/webmachine_decision_core.erl"},{line,48}]}, 
{webmachine_decision_core,decision,1, 
[{file,"src/webmachine_decision_core.erl"},{line,558}]}, 
{webmachine_decision_core,handle_request,2, 
[{file,"src/webmachine_decision_core.erl"},{line,33}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>%

################################################################## 
##################################################################

riak-repl status

realtime_enabled: "REDACTED" 
realtime_started: [] 
fullsync_enabled: [] 
fullsync_running: [] 
proxy_get_enabled: [] 
test3_ips: "REDACTED:9010" 
site2_ips: "REDACTED:9010" 
riak_repl_stat_ts: 1394050312 
server_bytes_sent: 341366 
server_bytes_recv: 130687 
server_connects: 0 
server_connect_errors: 0 
server_fullsyncs: 4 
client_bytes_sent: 1818584 
client_bytes_recv: 0 
client_connects: 0 
client_connect_errors: 22466 
client_redirect: 0 
objects_dropped_no_clients: 55 
objects_dropped_no_leader: 0 
objects_sent: 100 
objects_forwarded: 0 
elections_elected: 2 
elections_leader_changed: 2 
client_rx_kbps: [0,0,0,0,0,0,0,0] 
client_tx_kbps: [0,0,0,0,0,0,0,0] 
server_rx_kbps: [0,0,0,0,0,0,0,0] 
server_tx_kbps: [0,0,0,0,0,0,0,0] 
rt_source_errors: 0 
rt_sink_errors: 0 
rt_dirty: 0 
leader: 'REDACTED' 
leader_message_queue_len: 0 
leader_total_heap_size: 317811 
leader_heap_size: 317811 
leader_stack_size: 9 
leader_reductions: 5797454 
leader_garbage_collection: [{min_bin_vheap_size,46368}, 
{min_heap_size,233}, 
{fullsweep_after,0}, 
{minor_gcs,0}] 
local_leader_message_queue_len: 0 
local_leader_heap_size: 317811 
client_stats: [{<8181.1493.299>, 
{message_queue_len,0}, 
{status,[{node,'REDACTED'}, 
{site,"test3"}, 
{strategy,riak_repl_keylist_server}, 
{fullsync_worker,<8181.1495.299>}, 
{queue_pid,<8181.1503.299>}, 
{dropped_count,0}, 
{queue_length,0}, 
{queue_byte_size,0}, 
{queue_max_size,104857600}, 
{queue_percentage,0}, 
{queue_pending,0}, 
{queue_max_pending,5}, 
{state,wait_for_partition}]}}, 
{<8181.12107.297>, 
{message_queue_len,0}, 
{status,[{node,'REDACTED'}, 
{site,"site2"}, 
{strategy,riak_repl_keylist_server}, 
{fullsync_worker,<8181.12109.297>}, 
{queue_pid,<8181.12117.297>}, 
{dropped_count,0}, 
{queue_length,0}, 
{queue_byte_size,0}, 
{queue_max_size,104857600}, 
{queue_percentage,0}, 
{queue_pending,0}, 
{queue_max_pending,5}, 
{state,wait_for_partition}]}}] 
sinks: [] 
server_stats: [] 
sources: [] 
fullsync_coordinator: [] 
fullsync_coordinator_srv: [] 
cluster_name: <<"REDACTED">> 
cluster_leader: 'REDACTED' 
connected_clusters: [<<"REDACTED">>] 
realtime_queue_stats: [{bytes,33827}, 
{max_bytes,104857600}, 
{consumers,[{"cluster3", 
[{pending,55}, 
{unacked,0}, 
{drops,0}, 
{errs,0}]}]}, 
{overload_drops,0}] 
proxy_get: [{requester,[]},{provider,[]}] 
realtime_send_kbps: 0 
realtime_recv_kgbps: 0 
fullsync_send_kbps: 0 
fullsync_recv_kbps: 0
lordnull commented

All relevant PRs have been merged.


lukebakken commented Mar 11, 2014

Please refer to this ticket for details: https://basho.zendesk.com/agent/#/tickets/7372

There appears to be a discrepancy between the riak-repl status output and the JSON generated for the /riak-repl/stats endpoint.

From the riak-repl status output, note that both replication sites appear in client_stats, each with its own status:

realtime_enabled: [] 
realtime_started: [] 
fullsync_enabled: [] 
fullsync_running: [] 
proxy_get_enabled: [] 
site3_ips: "REDACTED:9010, REDACTED:9010" 
site2_ips: "REDACTED:9010, REDACTED:9010, REDACTED:9010, REDACTED:9010" 
riak_repl_stat_ts: 1394556973 
server_bytes_sent: 0 
server_bytes_recv: 0 
server_connects: 0 
server_connect_errors: 0 
server_fullsyncs: 0 
client_bytes_sent: 10446 
client_bytes_recv: 0 
client_connects: 2 
client_connect_errors: 0 
client_redirect: 1 
objects_dropped_no_clients: 0 
objects_dropped_no_leader: 0 
objects_sent: 0 
objects_forwarded: 0 
elections_elected: 0 
elections_leader_changed: 0 
client_rx_kbps: [0,0,0,0,0,0,0,0] 
client_tx_kbps: [0,0,0,0,0,0,0,0] 
server_rx_kbps: [0,0,0,0,0,0,0,0] 
server_tx_kbps: [0,0,0,0,0,0,0,0] 
rt_source_errors: 0 
rt_sink_errors: 0 
rt_dirty: 0 
leader: 'REDACTED' 
leader_message_queue_len: 0 
leader_total_heap_size: 233 
leader_heap_size: 233 
leader_stack_size: 9 
leader_reductions: 9178 
leader_garbage_collection: [{min_bin_vheap_size,46368}, 
{min_heap_size,233}, 
{fullsweep_after,0}, 
{minor_gcs,0}] 
local_leader_message_queue_len: 0 
local_leader_heap_size: 233 
client_stats: [{<8182.1945.0>, 
{message_queue_len,0}, 
{status, 
[{node,'REDACTED'}, 
{site,"REDACTED"}, 
{strategy,riak_repl_keylist_client}, 
{fullsync_worker,<8182.2204.0>}, 
{put_pool_size,5}, 
{connected,"REDACTED",9010}, 
{cluster_name, 
<<"{'REDACTED',{1392,223599,291838}}">>}, 
{state,wait_for_fullsync}]}}, 
{<8182.1943.0>, 
{message_queue_len,0}, 
{status, 
[{node,'REDACTED'}, 
{site,"REDACTED"}, 
{strategy,riak_repl_keylist_client}, 
{fullsync_worker,<8182.2186.0>}, 
{put_pool_size,5}, 
{connected,"REDACTED",9010}, 
{cluster_name, 
<<"{'REDACTED',{1392,223382,770341}}">>}, 
{state,wait_for_fullsync}]}}] 
sinks: [] 
server_stats: [] 
sources: [] 
fullsync_coordinator: [] 
fullsync_coordinator_srv: [] 
cluster_name: <<"undefined">> 
cluster_leader: 'REDACTED' 
connected_clusters: [] 
realtime_queue_stats: [{bytes,768}, 
{max_bytes,104857600}, 
{consumers,[]}, 
{overload_drops,0}] 
proxy_get: [{requester,[]},{provider,[]}] 
realtime_send_kbps: 0 
realtime_recv_kgbps: 0 
fullsync_send_kbps: 0 
fullsync_recv_kbps: 0

However, from the HTTP endpoint, only site3 is included, and status is a single object rather than the expected array:

{ 
"realtime_enabled": "", 
"realtime_started": "", 
"fullsync_enabled": "", 
"fullsync_running": "", 
"cluster_name": "undefined", 
"cluster_leader": "REDACTED", 
"connected_clusters": [], 
"riak_repl_stat_ts": 1394556189, 
"server_bytes_sent": 0, 
"server_bytes_recv": 0, 
"server_connects": 0, 
"server_connect_errors": 0, 
"server_fullsyncs": 0, 
"client_bytes_sent": 10446, 
"client_bytes_recv": 0, 
"client_connects": 2, 
"client_connect_errors": 0, 
"client_redirect": 1, 
"objects_dropped_no_clients": 0, 
"objects_dropped_no_leader": 0, 
"objects_sent": 0, 
"objects_forwarded": 0, 
"elections_elected": 0, 
"elections_leader_changed": 0, 
"client_rx_kbps": [], 
"client_tx_kbps": [], 
"server_rx_kbps": [], 
"server_tx_kbps": [], 
"rt_source_errors": 0, 
"rt_sink_errors": 0, 
"rt_dirty": 0, 
"leader": "REDACTED", 
"leader_message_queue_len": 0, 
"leader_total_heap_size": 233, 
"leader_heap_size": 233, 
"leader_stack_size": 9, 
"leader_reductions": 4187, 
"leader_garbage_collection": { 
"min_bin_vheap_size": 46368, 
"min_heap_size": 233, 
"fullsweep_after": 0, 
"minor_gcs": 0 
}, 
"local_leader_message_queue_len": 0, 
"local_leader_heap_size": 233, 
"pid": "<0.1945.0>", 
"message_queue_len": 0, 
"status": { 
"node": "REDACTED", 
"site": "site3", 
"strategy": "riak_repl_keylist_client", 
"fullsync_worker": "<0.2204.0>", 
"put_pool_size": 5, 
"connected": "REDACTED:9010", 
"cluster_name": "{REDACTED',{1392,223599,291838}}", 
"state": "wait_for_fullsync" 
}, 
"sinks": "", 
"sources": "", 
"realtime_queue_stats": { 
"bytes": 768, 
"max_bytes": 104857600, 
"consumers": [], 
"overload_drops": 0 
}, 
"fullsync_coordinator": "", 
"fullsync_coordinator_srv": "", 
"proxy_get": { 
"requester": [], 
"provider": [] 
}, 
"realtime_send_kbps": 0, 
"realtime_recv_kgbps": 0, 
"fullsync_send_kbps": 0, 
"fullsync_recv_kbps": 0 
}
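
For reference, a hedged sketch of how the whole client_stats list could be serialized so that every connected site survives the JSON conversion as an array entry rather than a single flattened object. Module and function names are hypothetical, and the {struct, ...} terms follow the mochijson2 convention used by the mochiweb/webmachine stack; this is not the riak_repl implementation:

-module(repl_stats_json_sketch).
-export([client_stats_to_json/1]).

%% Hypothetical sketch, not the riak_repl code. Each client_stats entry is
%% {Pid, {message_queue_len, N}, {status, Proplist}}; mapping over the whole
%% list keeps both site2 and site3 in the output.
client_stats_to_json(ClientStats) when is_list(ClientStats) ->
    {struct, [{client_stats, [entry_to_json(E) || E <- ClientStats]}]}.

entry_to_json({Pid, {message_queue_len, Len}, {status, Status}}) ->
    {struct, [{pid, pid_bin(Pid)},
              {message_queue_len, Len},
              {status, {struct, [field_to_json(F) || F <- Status]}}]}.

%% Pids and Erlang strings are not JSON-safe as-is; convert them to binaries.
field_to_json({K, V}) when is_pid(V) -> {K, pid_bin(V)};
field_to_json({K, V}) when is_list(V) -> {K, list_to_binary(V)};
field_to_json({K, V}) when is_atom(V); is_integer(V); is_binary(V) -> {K, V};
%% Anything else, e.g. {connected, Host, Port}, is rendered as a string.
field_to_json(T) when is_tuple(T), is_atom(element(1, T)) ->
    {element(1, T), list_to_binary(io_lib:format("~p", [T]))}.

pid_bin(Pid) -> list_to_binary(erlang:pid_to_list(Pid)).

mochijson2:encode/1 would then turn the resulting term into the JSON response body.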

cmeiklejohn commented

I'm closing this since we've already merged the fix.
