
k6 Insights (1/2): Refactor request metadata output to a separate package #3201

Merged

Conversation

@Blinkuu Blinkuu commented Jul 17, 2023

What?

This PR refactors the request metadata logic into a separate package so we can later reuse it in cloud output v1.

Why?

Due to cloud output v2 being at risk for v0.46.0, we decided to incorporate the new request metadata output into cloud output v1. This change is done to mitigate the risk of failing to deliver the new insights features.

Related PR(s)/Issue(s)

This PR is part of a PR chain. Next PR: #3202.

Previous PRs:

@Blinkuu Blinkuu force-pushed the refactor_request_metadata_output_to_separate_package branch from a0d553b to 47c78a5 Compare July 17, 2023 12:10

codecov-commenter commented Jul 17, 2023

Codecov Report

Merging #3201 (4d7449b) into master (5f5e374) will increase coverage by 0.01%.
The diff coverage is 89.68%.

❗ Current head 4d7449b differs from pull request most recent head 47c78a5. Consider uploading reports for the commit 47c78a5 to get more accurate results

@@            Coverage Diff             @@
##           master    #3201      +/-   ##
==========================================
+ Coverage   72.85%   72.87%   +0.01%     
==========================================
  Files         256      259       +3     
  Lines       19804    19810       +6     
==========================================
+ Hits        14429    14436       +7     
+ Misses       4474     4473       -1     
  Partials      901      901              
Flag      Coverage Δ
ubuntu    72.80% <89.68%> (+<0.01%) ⬆️
windows   72.72% <89.68%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                      Coverage Δ
metrics/engine/ingester.go          91.48% <ø> (ø)
output/cloud/expv2/collect.go       92.59% <ø> (+0.51%) ⬆️
output/cloud/expv2/flush.go         91.04% <ø> (-0.85%) ⬇️
output/cloud/expv2/output.go        65.55% <14.28%> (-0.20%) ⬇️
cmd/run.go                          74.27% <86.66%> (+0.11%) ⬆️
output/cloud/insights/collect.go    91.48% <91.48%> (ø)
metrics/sink.go                     97.40% <96.29%> (+1.51%) ⬆️
execution/scheduler.go              93.09% <100.00%> (ø)
metrics/engine/engine.go            90.26% <100.00%> (+1.28%) ⬆️
metrics/sample.go                   95.23% <100.00%> (ø)
... and 3 more


@olegbespalov olegbespalov left a comment


Generally looks good. I left two minor comments 👍

Also adding @oleiade as the second reviewer from the core team.

output/cloud/insights/flush.go (comment thread resolved)
@@ -156,14 +150,14 @@ func (o *Output) Start() error {
 	}
 	insightsClient := insights.NewClient(insightsClientConfig)

-	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
+	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
Collaborator
Why has this increased tenfold? 🤔

If we are not sure about the value, it seems better to make it configurable.

Contributor Author

I increased this because 1 second is not that long for establishing a connection: our backend is in a single region, while cloud tests can be distributed geographically. This can be inferred from RTTs across the globe.

Collaborator

Okay, it still sounds like we should make these settings configurable so that we can adjust them quickly, along with some of the values from:

insightsClientConfig := insights.ClientConfig{
	IngesterHost: o.config.TracesHost.String,
	Timeout:      types.NewNullDuration(90*time.Second, false),
	AuthConfig: insights.ClientAuthConfig{
		Enabled:                  true,
		TestRunID:                testRunID,
		Token:                    o.config.Token.String,
		RequireTransportSecurity: true,
	},
	TLSConfig: insights.ClientTLSConfig{
		Insecure: false,
	},
	RetryConfig: insights.ClientRetryConfig{
		RetryableStatusCodes: `"UNKNOWN","INTERNAL","UNAVAILABLE","DEADLINE_EXCEEDED"`,
		MaxAttempts:          3,
		PerRetryTimeout:      30 * time.Second,
		BackoffConfig: insights.ClientBackoffConfig{
			Enabled:        true,
			JitterFraction: 0.1,
			WaitBetween:    1 * time.Second,
		},
	},
}
insightsClient := insights.NewClient(insightsClientConfig)

But that could be the subject of a separate PR 👍

Contributor Author

Ideally, yeah. However, these are already tuned based on a few experiments we did in the past. Let's leave this for the future.

@olegbespalov olegbespalov requested review from oleiade and removed request for mstoykov and codebien July 17, 2023 16:43
@olegbespalov olegbespalov added this to the v0.46.0 milestone Jul 18, 2023

@oleiade oleiade left a comment


🚀 🚀 🚀

@oleiade oleiade merged commit ac9c6bc into master Jul 19, 2023
21 checks passed
@oleiade oleiade deleted the refactor_request_metadata_output_to_separate_package branch July 19, 2023 10:05
4 participants