[added] jsz nats and http monitoring endpoint for jetstream #1881
Conversation
Signed-off-by: Matthias Hanel <mh@synadia.com>
I am going to let @ripienaar take a look and respond in the AM.
this looks really great
So the main missing bits in my view:
- JS Message counts
- Commit times for ingesting messages
- Expired/acked messages
- Raft details around how many elections, how many times q was lost etc
Most of this data obviously doesn't exist now and we'd need to see about gathering it; this is already an amazing start though.
server/monitor.go
Outdated
type StreamDetail struct {
	Name    string       `json:"name"`
	Cluster *ClusterInfo `json:"cluster,omitempty"`
	Config  StreamConfig `json:"config,omitempty"`
Probably we can drop the config or make it optional; it's quite verbose and in theory we'd need to handle huge amounts of them. Ditto the consumer config.
Made it optional for streams and consumers.
I kept the one for JetStream itself as it's small, only included once, and helps put the memory/storage numbers into context.
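The "make the config optional" change discussed above can be achieved in Go by marshaling the config through a pointer field with `omitempty`, so it disappears from the JSON entirely when not requested. The trimmed-down types below are illustrative only, not the actual definitions in server/monitor.go:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical trimmed-down stand-ins for the server types; only
// enough fields to show the omitempty behavior.
type StreamConfig struct {
	MaxMsgs int64 `json:"max_msgs"`
}

// With Config as a *pointer* and omitempty, a nil config is dropped
// from the output instead of being rendered as an empty object.
type StreamDetail struct {
	Name   string        `json:"name"`
	Config *StreamConfig `json:"config,omitempty"`
}

func render(d StreamDetail) string {
	b, _ := json.Marshal(d)
	return string(b)
}

func main() {
	// Without a config the field is omitted entirely.
	fmt.Println(render(StreamDetail{Name: "my-stream"}))
	// → {"name":"my-stream"}

	// With a config it is included as usual.
	fmt.Println(render(StreamDetail{Name: "my-stream", Config: &StreamConfig{MaxMsgs: 100}}))
	// → {"name":"my-stream","config":{"max_msgs":100}}
}
```

Note that a non-pointer struct field with `omitempty` is never omitted by `encoding/json`, which is why the pointer matters here.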
server/monitor.go
Outdated
StreamsOutOfQuorumCnt     uint64 `json:"total_streams_out_quorum,omitempty"`
ConsumersNonReplicatedCnt uint64 `json:"total_consumers_non_replicated,omitempty"`
ConsumersInQuorumCnt      uint64 `json:"total_consumers_in_quorum,omitempty"`
ConsumersOutOfQuorumCnt   uint64 `json:"total_consumers_out_quorum,omitempty"`
We need to think a bit about the in/out terminology here; not sure the distinction really matters, to be honest. Let's add a total count for now, and later if we need to we can add counts for specifics.
Removed them for streams and consumers; the stream/consumer counts were already present.
The change in consumer.go is to circumvent custom JSON marshaling. Signed-off-by: Matthias Hanel <mh@synadia.com>
@ripienaar output without config:
{
"server_id": "NDLU4RTFCRMQ2B6RKVDVSXJFKNRFBJG4OSXBCLTHMELJ4DZCLKCNHVX7",
"now": "2021-02-04T14:29:43.075648-05:00",
"config": {
"max_memory": 10485760,
"max_storage": 10485760,
"store_dir": "/var/folders/9h/6g_c9l6n6bb8gp331d_9y0_w0000gn/T/srv_7500365941842"
},
"memory": 0,
"storage": 66,
"api": {
"total": 5,
"errors": 0
},
"total_streams": 1,
"total_consumers": 1,
"total_messages": 1,
"total_message_bytes": 33,
"meta_cluster": {
"name": "cluster_name",
"leader": "server_5500",
"replicas": [
{
"name": "server_5500",
"current": true,
"active": 1607000
}
]
},
"account_details": [
{
"name": "ACC",
"id": "ACC",
"memory": 0,
"storage": 66,
"api": {
"total": 5,
"errors": 0
},
"stream_detail": [
{
"name": "my-stream-replicated",
"cluster": {
"name": "cluster_name",
"leader": "server_7500",
"replicas": [
{
"name": "server_5500",
"current": true,
"active": 503000
}
]
},
"state": {
"messages": 1,
"bytes": 33,
"first_seq": 1,
"first_ts": "2021-02-04T19:29:43.074964Z",
"last_seq": 1,
"last_ts": "2021-02-04T19:29:43.074964Z",
"consumer_count": 1
},
"consumer_detail": [
{
"stream_name": "my-stream-replicated",
"name": "my-consumer-replicated",
"created": "2021-02-04T14:29:43.008009-05:00",
"delivered": {
"consumer_seq": 0,
"stream_seq": 0
},
"ack_floor": {
"consumer_seq": 0,
"stream_seq": 0
},
"num_ack_pending": 0,
"num_redelivered": 0,
"num_waiting": 0,
"num_pending": 0,
"cluster": {
"name": "cluster_name",
"leader": "server_5500",
"replicas": [
{
"name": "server_5500",
"current": false,
"active": 1612466983075684000
}
]
}
}
]
}
]
}
]
}
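A client consuming this endpoint would typically decode the response into structs mirroring the JSON above. This is a minimal sketch covering only a few of the top-level fields from the sample; the struct names (`JSzResponse`, `APIStats`) are illustrative, not the server's actual type names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// APIStats mirrors the "api" object in the sample output.
type APIStats struct {
	Total  uint64 `json:"total"`
	Errors uint64 `json:"errors"`
}

// JSzResponse covers a subset of the top-level fields shown above.
type JSzResponse struct {
	ServerID          string   `json:"server_id"`
	Memory            uint64   `json:"memory"`
	Storage           uint64   `json:"storage"`
	API               APIStats `json:"api"`
	TotalStreams      int      `json:"total_streams"`
	TotalConsumers    int      `json:"total_consumers"`
	TotalMessages     uint64   `json:"total_messages"`
	TotalMessageBytes uint64   `json:"total_message_bytes"`
}

// sample is a trimmed excerpt of the output above.
const sample = `{
  "server_id": "NDLU4RTFCRMQ2B6RKVDVSXJFKNRFBJG4OSXBCLTHMELJ4DZCLKCNHVX7",
  "memory": 0,
  "storage": 66,
  "api": {"total": 5, "errors": 0},
  "total_streams": 1,
  "total_consumers": 1,
  "total_messages": 1,
  "total_message_bytes": 33
}`

func parse(data []byte) (JSzResponse, error) {
	var r JSzResponse
	err := json.Unmarshal(data, &r)
	return r, err
}

func main() {
	r, err := parse([]byte(sample))
	if err != nil {
		panic(err)
	}
	fmt.Printf("streams=%d consumers=%d bytes=%d\n",
		r.TotalStreams, r.TotalConsumers, r.TotalMessageBytes)
	// → streams=1 consumers=1 bytes=33
}
```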
Signed-off-by: Matthias Hanel <mh@synadia.com>
LGTM. I am sure there's some iteration here, but let me see if I can build something useful with this and we can tweak. Thanks a lot.
@derekcollison RI is ok with this. How about you?
If he is good then go ahead and merge. Thx
The new endpoints are /jsz over HTTP and the NATS subjects "$SYS.REQ.SERVER.PING.JSZ" and "$SYS.REQ.SERVER.%s.JSZ". "$SYS.REQ.ACCOUNT.%s.JSZ" will only return info for the particular account. Signed-off-by: Matthias Hanel <mh@synadia.com>
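The `%s` placeholders in the subjects above are filled in with a concrete server ID or account name before the request is published. A minimal sketch of that substitution; the helper names and the example IDs are illustrative, not part of the server API:

```go
package main

import "fmt"

// serverJszSubject builds the per-server JSZ request subject from the
// "$SYS.REQ.SERVER.%s.JSZ" template described above.
func serverJszSubject(serverID string) string {
	return fmt.Sprintf("$SYS.REQ.SERVER.%s.JSZ", serverID)
}

// accountJszSubject builds the per-account JSZ request subject from the
// "$SYS.REQ.ACCOUNT.%s.JSZ" template, which returns info only for that
// account.
func accountJszSubject(accountName string) string {
	return fmt.Sprintf("$SYS.REQ.ACCOUNT.%s.JSZ", accountName)
}

func main() {
	fmt.Println(serverJszSubject("NDLU4RTFCRMQ2B6RKVDVSXJFKNRFBJG4OSXBCLTHMELJ4DZCLKCNHVX7"))
	fmt.Println(accountJszSubject("ACC"))
	// → $SYS.REQ.ACCOUNT.ACC.JSZ
}
```

A system-account client would publish a request on the resulting subject and decode the JSZ response from the reply.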
@ripienaar Are the format and functionality what you had in mind? Currently I page on the account level.
sample: