USMON-718: Kafka fetch error code #25929
base: main
Conversation
…er the `kafka_continue_parse_response_loop` function
…T with error doesn't work for some reason, need to keep debugging the raw test
Regression Detector

Regression Detector Results

Run ID: afb4e4af-cd65-49c1-852e-63fdc8d936ad
Metrics dashboard · Target profiles
Baseline: eeb2b90

Performance changes are noted in the perf column of each table:

No significant changes in experiment optimization goals

Confidence level: 90.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
| perf | experiment | goal | Δ mean % | Δ mean % CI | links |
|---|---|---|---|---|---|
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.05 | [-0.85, +0.96] | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.00, +0.00] | Logs |
| ➖ | basic_py_check | % cpu utilization | -0.40 | [-3.20, +2.39] | Logs |
| ➖ | otel_to_otel_logs | ingress throughput | -0.50 | [-1.30, +0.31] | Logs |
| ➖ | idle | memory utilization | -0.51 | [-0.55, -0.46] | Logs |
| ➖ | pycheck_1000_100byte_tags | % cpu utilization | -0.91 | [-5.81, +3.99] | Logs |
| ➖ | file_tree | memory utilization | -1.12 | [-1.19, -1.04] | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -4.43 | [-16.94, +8.09] | Logs |
Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
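The three criteria can be sketched as a small Go predicate. The types and names below are illustrative stand-ins, not the Regression Detector's actual implementation:

```go
package main

import "fmt"

// experimentResult holds one row of the regression table (illustrative
// types, not the detector's real ones).
type experimentResult struct {
	name      string
	deltaMean float64 // Δ mean %
	ciLow     float64 // lower bound of the 90% CI on Δ mean %
	ciHigh    float64 // upper bound of the 90% CI on Δ mean %
	erratic   bool
}

// isRegression applies the three criteria above: an effect size of at
// least 5%, a 90% CI that excludes zero, and a non-erratic experiment.
func isRegression(r experimentResult) bool {
	const threshold = 5.0
	bigEnough := r.deltaMean >= threshold || r.deltaMean <= -threshold
	ciExcludesZero := r.ciLow > 0 || r.ciHigh < 0
	return bigEnough && ciExcludesZero && !r.erratic
}

func main() {
	// idle: -0.51 [-0.55, -0.46] — the CI excludes zero, but the effect
	// size is under 5%, so it is not flagged.
	fmt.Println(isRegression(experimentResult{"idle", -0.51, -0.55, -0.46, false})) // false
	// A hypothetical clear regression: -7.2 [-9.0, -5.4].
	fmt.Println(isRegression(experimentResult{"example", -7.2, -9.0, -5.4, false})) // true
}
```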
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files

```
@@             Coverage Diff             @@
##             main   #25929       +/-   ##
===========================================
- Coverage   44.94%   42.50%     -2.45%
===========================================
  Files        2354      256      -2098
  Lines      272845    18657    -254188
===========================================
- Hits       122639     7930    -114709
+ Misses     140536    10368    -130168
+ Partials     9670      359      -9311
```

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

```
inv create-vm --pipeline-id=37416707 --os-family=ubuntu
```

Note: This applies to commit 8a567f8
```c
extra_debug("enqueue partition, #partitions left %d, records_count %d",
            response->partitions_count,
            response->transaction.records_count);
kafka_batch_enqueue_wrapper(kafka, tup, &response->transaction);
```
Won't queuing after every partition cause a large increase in events, and possibly even cause the batching to be full if there are many partitions? For example, the "many partitions" test in the fetch v12 patchset may fail with this change.

Perhaps we could at the very least optimize this for the common case of no error by only queuing the event if the error code of the current partition is different from the error codes of the previous partitions? The enqueue would then be done in KAFKA_FETCH_RESPONSE_PARTITION_ERROR_CODE_START instead (and at the end).
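The suggested optimization can be sketched outside eBPF as a plain Go simulation. `enqueueOnErrorChange`, `event`, and the flat slice of per-partition error codes are hypothetical stand-ins for the kernel-side state machine, shown only to illustrate the batching behavior:

```go
package main

import "fmt"

// event is a stand-in for the transaction that gets enqueued: one error
// code plus the records accumulated under it.
type event struct {
	errorCode    int8
	recordsCount int
}

// enqueueOnErrorChange accumulates record counts and flushes an event only
// when the current partition's error code differs from the previous one,
// plus a final flush at the end — the reviewer's proposed scheme.
func enqueueOnErrorChange(partitionErrors []int8, recordsPerPartition int) []event {
	var events []event
	var cur *event
	for _, code := range partitionErrors {
		if cur == nil || cur.errorCode != code {
			if cur != nil {
				events = append(events, *cur)
			}
			cur = &event{errorCode: code}
		}
		cur.recordsCount += recordsPerPartition
	}
	if cur != nil {
		events = append(events, *cur) // final flush at the end
	}
	return events
}

func main() {
	// Common case: 100 partitions, no errors → a single enqueued event
	// instead of 100.
	noErrors := make([]int8, 100)
	fmt.Println(len(enqueueOnErrorChange(noErrors, 5))) // 1
	// Mixed case: one erroring partition splits the stream into 3 events.
	mixed := []int8{0, 0, 3, 0, 0}
	fmt.Println(len(enqueueOnErrorChange(mixed, 5))) // 3
}
```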
The optimization is a great idea, I'll implement it
```go
respParition := kmsg.NewFetchResponseTopicPartition()

for _, recordBatch := range recordBatches {
	respParition.RecordBatches = recordBatch.AppendTo(respParition.RecordBatches)
}

respParition.ErrorCode = errorCode
```
Perhaps it would be simpler to just do this assignment in the caller which needs it, rather than passing it in as an argument to this function, since only a minority of callers need it (similar to how AbortedTransactions is handled)?
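The suggested refactor can be sketched with an illustrative struct (a stand-in for `kmsg.FetchResponseTopicPartition`, with a hypothetical `makePartition` helper): the builder constructs the common error-free partition, and the few callers that need an error set the field afterwards:

```go
package main

import "fmt"

// fetchPartition is an illustrative stand-in for the kmsg partition type;
// only the fields needed for the sketch are shown.
type fetchPartition struct {
	ErrorCode int16
	Records   []byte
}

// makePartition builds the common, error-free partition. Callers that need
// an error code assign it themselves after construction, so the builder's
// signature stays minimal.
func makePartition(records []byte) fetchPartition {
	return fetchPartition{Records: records}
}

func main() {
	// Majority of callers: no error code needed, zero value suffices.
	ok := makePartition([]byte("batch"))
	// Minority: set the error in the caller after construction.
	bad := makePartition(nil)
	bad.ErrorCode = 3 // UNKNOWN_TOPIC_OR_PARTITION in the Kafka protocol
	fmt.Println(ok.ErrorCode, bad.ErrorCode) // 0 3
}
```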
Good point
```go
	return makeFetchResponse(makeFetchResponseTopic(topic, partitions...))
},
numFetchedRecords: 5 * 4 * 3,
errorCode:         3,
```
Maybe we should have a test case which includes a fetch response with partitions both with and without errors?
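Such a mixed test case might be structured as below. The types are hypothetical simplifications — the real tests build responses with `kmsg` and the repo's `makeFetchResponse` helpers — but the shape of the case is the point: one response, partitions both with and without errors:

```go
package main

import "fmt"

// partitionSpec describes one partition in the synthetic fetch response
// (hypothetical type for illustration).
type partitionSpec struct {
	errorCode  int16
	numRecords int
}

// testCase mirrors the table-driven style of the snippet above.
type testCase struct {
	name              string
	partitions        []partitionSpec
	numFetchedRecords int
}

// totalRecords sums the records across all partitions, erroring or not.
func totalRecords(partitions []partitionSpec) int {
	n := 0
	for _, p := range partitions {
		n += p.numRecords
	}
	return n
}

func main() {
	tc := testCase{
		name: "mixed error and non-error partitions",
		partitions: []partitionSpec{
			{errorCode: 0, numRecords: 5},
			{errorCode: 3, numRecords: 5}, // UNKNOWN_TOPIC_OR_PARTITION
			{errorCode: 0, numRecords: 5},
		},
	}
	tc.numFetchedRecords = totalRecords(tc.partitions)
	fmt.Println(tc.numFetchedRecords) // 15
}
```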
Will do
```
@@ -255,6 +255,84 @@ static __always_inline enum parse_result read_with_remainder(kafka_response_cont
    return RET_DONE;
}

static __always_inline enum parse_result read_with_remainder_s16(kafka_response_context_t *response, pktbuf_t pkt,
```
In the Fetch v12 PR, there is a version of this which avoids the copy/paste, so depending on which one is merged first we can resolve the conflicts in the favor of that version. So this version should be OK for this PR.
I removed this as I merged the Fetch v12 PR
…onseTopicPartition function
…up performance - fixing issues
…up performance - fixing issues #2
What does this PR do?
This PR improves the USM Kafka monitoring feature by:
Motivation
Our aim is to include error codes in the USM RED metrics for the Kafka protocol. This serves as the initial step, with the subsequent step involving parsing Kafka produce responses to also extract the error codes.
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes