
USMON-718: Kafka fetch error code #25929

Open · wants to merge 44 commits into base: main

Conversation

@DanielLavie (Contributor) commented May 26, 2024

What does this PR do?

This PR improves the USM Kafka monitoring feature by:

  • Implementing support for parsing error codes from Kafka fetch responses
  • Incorporating support for error codes in Kafka request stats and when encoding Kafka aggregation to the protobuf
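For context, in the Kafka wire protocol a fetch response partition block begins with the partition index (int32) followed by the error code (int16), both big-endian. A minimal userspace sketch of extracting it (the function name and buffer handling are illustrative, not the agent's actual eBPF parser):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// partitionErrorCode extracts the error code from the start of a fetch
// response partition block: partition_index (int32) followed by
// error_code (int16), both big-endian, per the Kafka wire protocol.
func partitionErrorCode(block []byte) (int16, error) {
	if len(block) < 6 {
		return 0, fmt.Errorf("partition block too short: %d bytes", len(block))
	}
	return int16(binary.BigEndian.Uint16(block[4:6])), nil
}

func main() {
	// partition_index = 0, error_code = 3 (UNKNOWN_TOPIC_OR_PARTITION)
	block := []byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x03}
	code, err := partitionErrorCode(block)
	fmt.Println(code, err) // 3 <nil>
}
```

In the kernel-side parser the same field is read via the state machine rather than from a contiguous buffer, since the value may straddle packet reads.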

Motivation

Our aim is to include error codes in the USM RED metrics for the Kafka protocol. This PR is the first step; a follow-up will parse Kafka produce responses to extract their error codes as well.

Additional Notes

  • At present, the backend does not support non-HTTP error codes. Therefore, we won't be able to view these error codes in the UI until this issue is resolved.
  • Due to the need to support Kernel 4.14, I had to make some trade-offs in code clarity to accommodate it. We'll definitely need to create better documentation for the Kafka kernel state machine, both at a high level and within the code itself.
  • Load test results can be found here. There's an increase of ~37% in CPU usage in the Kafka codepath, as can be seen in the profiler. The same method is used in the HTTP codepath, and I couldn't find any good optimization to apply in this context.
(profiler screenshot)

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

@pr-commenter (bot) commented May 26, 2024

Regression Detector

Regression Detector Results

Run ID: afb4e4af-cd65-49c1-852e-63fdc8d936ad · Metrics dashboard · Target profiles

Baseline: eeb2b90
Comparison: 8a567f8

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| experiment | goal | Δ mean % | Δ mean % CI | links |
|---|---|---|---|---|
| uds_dogstatsd_to_api_cpu | % cpu utilization | +0.05 | [-0.85, +0.96] | Logs |
| tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | Logs |
| uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.00, +0.00] | Logs |
| basic_py_check | % cpu utilization | -0.40 | [-3.20, +2.39] | Logs |
| otel_to_otel_logs | ingress throughput | -0.50 | [-1.30, +0.31] | Logs |
| idle | memory utilization | -0.51 | [-0.55, -0.46] | Logs |
| pycheck_1000_100byte_tags | % cpu utilization | -0.91 | [-5.81, +3.99] | Logs |
| file_tree | memory utilization | -1.12 | [-1.19, -1.04] | Logs |
| tcp_syslog_to_blackhole | ingress throughput | -4.43 | [-16.94, +8.09] | Logs |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@DanielLavie added the qa/done (Skip QA week as QA was done before merge and regressions are covered by tests), team/usm (The USM team), and changelog/no-changelog labels on May 27, 2024
@DanielLavie DanielLavie marked this pull request as ready for review May 27, 2024 12:02
@DanielLavie DanielLavie requested review from a team as code owners May 27, 2024 12:02

codecov bot commented May 27, 2024

Codecov Report

Attention: Patch coverage is 85.71429% with 2 lines in your changes missing coverage. Please review.

Project coverage is 42.50%. Comparing base (4dd3f74) to head (de2d9f8).

Current head de2d9f8 differs from the pull request's most recent head 3949098.

Please upload reports for the commit 3949098 to get more accurate results.

| Files | Patch % | Lines |
|---|---|---|
| pkg/network/encoding/marshal/usm_kafka.go | 80.00% | 1 Missing and 1 partial ⚠️ |
Additional details and impacted files
@@             Coverage Diff             @@
##             main   #25929       +/-   ##
===========================================
- Coverage   44.94%   42.50%    -2.45%     
===========================================
  Files        2354      256     -2098     
  Lines      272845    18657   -254188     
===========================================
- Hits       122639     7930   -114709     
+ Misses     140536    10368   -130168     
+ Partials     9670      359     -9311     
| Flag | Coverage Δ |
|---|---|
| amzn_aarch64 | 42.66% <85.71%> (-3.13%) ⬇️ |
| centos_x86_64 | 42.66% <85.71%> (-3.04%) ⬇️ |
| ubuntu_aarch64 | 42.66% <85.71%> (-3.13%) ⬇️ |
| ubuntu_x86_64 | 42.68% <85.71%> (-3.11%) ⬇️ |
| windows_amd64 | 46.35% <28.57%> (-4.42%) ⬇️ |

Flags with carried forward coverage won't be shown. Click here to find out more.


@pr-commenter (bot) commented May 27, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv create-vm --pipeline-id=37416707 --os-family=ubuntu

Note: This applies to commit 8a567f8

extra_debug("enqueue partition, #partitions left %d, records_count %d",
response->partitions_count,
response->transaction.records_count);
kafka_batch_enqueue_wrapper(kafka, tup, &response->transaction);
@vitkyrka (Contributor) commented May 28, 2024:

Won't queuing after every partition cause a large increase in events, and possibly even cause the batch to become full if there are many partitions? For example, the "many partitions" test in the fetch v12 patchset may fail with this change.

Perhaps we could at the very least optimize this for the common case of no error by only queuing the event if the error code of the current partition is different from the error codes of the previous partitions? So the enqueue would be done in KAFKA_FETCH_RESPONSE_PARTITION_ERROR_CODE_START instead (and at the end).

@DanielLavie (Contributor, Author) replied:

The optimization is a great idea, I'll implement it
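The suggested optimization can be sketched in userspace Go (the real code is eBPF C inside the Kafka state machine; the type names here are illustrative): accumulate record counts across consecutive partitions and enqueue only when the error code changes, plus one final flush at the end.

```go
package main

import "fmt"

type partition struct {
	errorCode   int16
	recordCount int
}

type event struct {
	errorCode   int16
	recordCount int
}

// enqueueOnErrorChange emits one aggregated event per run of partitions
// sharing the same error code, instead of one event per partition. In
// the common case of no errors, a whole response collapses to one event.
func enqueueOnErrorChange(parts []partition) []event {
	var out []event
	cur := event{}
	for i, p := range parts {
		if i > 0 && p.errorCode != cur.errorCode {
			out = append(out, cur)
			cur = event{errorCode: p.errorCode}
		} else if i == 0 {
			cur.errorCode = p.errorCode
		}
		cur.recordCount += p.recordCount
	}
	if len(parts) > 0 {
		out = append(out, cur) // final flush
	}
	return out
}

func main() {
	// 100 healthy partitions collapse into a single aggregated event.
	parts := make([]partition, 100)
	for i := range parts {
		parts[i] = partition{errorCode: 0, recordCount: 10}
	}
	fmt.Println(len(enqueueOnErrorChange(parts))) // 1
}
```

This keeps event volume bounded by the number of error-code transitions rather than the partition count, which is what the reviewer's concern about filling the batch was about.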

respPartition := kmsg.NewFetchResponseTopicPartition()

for _, recordBatch := range recordBatches {
respPartition.RecordBatches = recordBatch.AppendTo(respPartition.RecordBatches)
}

respPartition.ErrorCode = errorCode
Contributor:

Perhaps it would be simpler to do this assignment in the caller that needs it, rather than passing it as an argument to this function, since only a minority of callers need it (similar to how AbortedTransactions is handled)?

Contributor (author):

Good point

return makeFetchResponse(makeFetchResponseTopic(topic, partitions...))
},
numFetchedRecords: 5 * 4 * 3,
errorCode: 3,
Contributor:

Maybe we should have a test case that includes a fetch response with a mix of partitions, both with and without errors?

Contributor (author):

Will do
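A mixed-error fetch response should end up in separate stat buckets per error code, which is what such a test case would assert on. A rough userspace sketch of that grouping (the types are illustrative, not the agent's actual stat keys):

```go
package main

import "fmt"

type partitionResult struct {
	errorCode int16
	records   int
}

// statsByErrorCode aggregates fetched record counts per error code,
// so partitions with and without errors land in distinct buckets.
func statsByErrorCode(results []partitionResult) map[int16]int {
	stats := make(map[int16]int)
	for _, r := range results {
		stats[r.errorCode] += r.records
	}
	return stats
}

func main() {
	// Two healthy partitions and one failing with error code 3.
	results := []partitionResult{{0, 5}, {0, 5}, {3, 0}}
	fmt.Println(statsByErrorCode(results)) // map[0:10 3:0]
}
```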

@@ -255,6 +255,84 @@ static __always_inline enum parse_result read_with_remainder(kafka_response_cont
return RET_DONE;
}

static __always_inline enum parse_result read_with_remainder_s16(kafka_response_context_t *response, pktbuf_t pkt,
Contributor:

In the Fetch v12 PR there is a version of this which avoids the copy/paste, so depending on which one is merged first, we can resolve the conflicts in favor of that version. This version should be OK for this PR.

Contributor (author):

I removed this as I merged the Fetch v12 PR
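For readers unfamiliar with the helper being discussed: the read-with-remainder pattern handles a value that is split across two packet reads. A simplified userspace Go sketch of the int16 case (the real helper is eBPF C with verifier-friendly bounds checks; names here are illustrative):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readS16WithRemainder reads a big-endian int16 that may straddle
// packet chunks: leftover bytes from the previous chunk are prepended,
// and if fewer than two bytes are available, the partial bytes are
// carried over as the new remainder for the next chunk.
func readS16WithRemainder(remainder, chunk []byte) (val int16, newRemainder []byte, ok bool) {
	buf := append(append([]byte{}, remainder...), chunk...)
	if len(buf) < 2 {
		return 0, buf, false
	}
	return int16(binary.BigEndian.Uint16(buf)), buf[2:], true
}

func main() {
	// First chunk ends mid-value: only the high byte arrives.
	_, rem, ok := readS16WithRemainder(nil, []byte{0x00})
	fmt.Println(ok) // false: carry the byte over

	val, _, _ := readS16WithRemainder(rem, []byte{0x03})
	fmt.Println(val) // 3
}
```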

Labels
changelog/no-changelog, component/system-probe, qa/done (Skip QA week as QA was done before merge and regressions are covered by tests), team/usm (The USM team)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

3 participants