
chore(crashtracker): use weaker mem ordering for OP_COUNTERS #1744

Merged
gh-worker-dd-mergequeue-cf854d[bot] merged 1 commit into main from
yannham/mem-ordering-op-counters
Mar 18, 2026

Conversation

@yannham
Contributor

@yannham yannham commented Mar 17, 2026

What does this PR do?

This PR replaces a number of sequentially consistent atomic accesses to the ops counters with weaker relaxed accesses, cleaning up a leftover TODO.

Motivation

The motivation for using the weakest applicable memory ordering is twofold:

  1. Performance: relaxed accesses compile to normal, non-atomic loads and stores on standard platforms (x86_64 and arm64 in particular). Whether this particular change has any performance impact is less obvious.
  2. Readability: I think my main motivation is that I find it easier, at least as a reader, to reason about weaker orderings. For example, a relaxed access indicates that there is no other unsynchronized data that this atomic protects or interacts with, which enables local reasoning (you don't have to care about what other threads might be doing). Sequentially consistent accesses are the converse: they participate in a global order involving all other seqcst accesses to this atomic, which is a strong and far-reaching assumption.

Additional Notes

This atomic is a counter, which is the poster child for Relaxed ordering (you usually only need the atomicity). This counter doesn't protect or interact with unsynchronized memory, so there's no reason to use a stronger ordering.

How to test the change?

You should see no difference in behavior, except possibly in performance.

@yannham yannham requested a review from a team as a code owner March 17, 2026 10:07
@pr-commenter

pr-commenter bot commented Mar 17, 2026

Benchmarks

Comparison

Benchmark execution time: 2026-03-17 13:52:49

Comparing candidate commit 58bfd52 in PR branch yannham/mem-ordering-op-counters with baseline commit bb2b2bb in branch main.

Found 0 performance improvements and 1 performance regression! Performance is the same for 58 metrics, 2 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because changes are often small:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
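The rule illustrated by the two diagrams can be sketched as a small predicate (a hypothetical illustration of the stated criterion, not the benchmarking platform's actual implementation), using the 1% threshold from the second diagram:

```rust
// A change is significant iff the whole CI lies outside the band
// [-threshold, +threshold], as described above.
fn is_significant(ci_lower: f64, ci_upper: f64, threshold: f64) -> bool {
    ci_upper < -threshold || ci_lower > threshold
}

fn main() {
    // First diagram: CI [-0.6%, +1.2%] overlaps the threshold band,
    // so the change is not flagged.
    assert!(!is_significant(-0.6, 1.2, 1.0));
    // Second diagram: CI [+1.3%, +3.1%] sits entirely above +1%,
    // so it counts as a significant regression.
    assert!(is_significant(1.3, 3.1, 1.0));
}
```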

scenario:ip_address/quantize_peer_ip_address_benchmark

  • 🟥 execution_time [+366.190ns; +384.706ns] or [+7.217%; +7.582%]

Candidate

Candidate benchmark details

Group 1

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
sql/obfuscate_sql_string execution_time 87.009µs 87.208µs ± 0.142µs 87.183µs ± 0.046µs 87.242µs 87.346µs 87.745µs 88.663µs 1.70% 6.253 56.559 0.16% 0.010µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
sql/obfuscate_sql_string execution_time [87.189µs; 87.228µs] or [-0.023%; +0.023%] None None None

Group 2

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_trace/test_trace execution_time 237.600ns 248.153ns ± 14.091ns 242.641ns ± 2.875ns 248.730ns 287.132ns 292.479ns 298.882ns 23.18% 2.162 3.563 5.66% 0.996ns 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_trace/test_trace execution_time [246.200ns; 250.106ns] or [-0.787%; +0.787%] None None None

Group 3

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching string interning on wordpress profile execution_time 159.715µs 160.553µs ± 0.360µs 160.515µs ± 0.161µs 160.688µs 160.995µs 161.201µs 164.192µs 2.29% 5.284 51.023 0.22% 0.025µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching string interning on wordpress profile execution_time [160.503µs; 160.603µs] or [-0.031%; +0.031%] None None None

Group 4

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
write only interface execution_time 1.258µs 3.206µs ± 1.427µs 2.976µs ± 0.030µs 3.016µs 3.659µs 14.068µs 14.800µs 397.31% 7.331 54.943 44.41% 0.101µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
write only interface execution_time [3.009µs; 3.404µs] or [-6.170%; +6.170%] None None None

Group 5

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
tags/replace_trace_tags execution_time 2.442µs 2.468µs ± 0.010µs 2.467µs ± 0.006µs 2.474µs 2.488µs 2.493µs 2.495µs 1.15% 0.569 0.174 0.39% 0.001µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
tags/replace_trace_tags execution_time [2.467µs; 2.470µs] or [-0.055%; +0.055%] None None None

Group 6

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching deserializing traces from msgpack to their internal representation execution_time 49.090ms 49.488ms ± 1.234ms 49.317ms ± 0.045ms 49.372ms 49.583ms 54.790ms 62.788ms 27.32% 8.958 84.064 2.49% 0.087ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching deserializing traces from msgpack to their internal representation execution_time [49.317ms; 49.659ms] or [-0.346%; +0.346%] None None None

Group 7

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
profile_add_sample_frames_x1000 execution_time 4.220ms 4.224ms ± 0.003ms 4.224ms ± 0.002ms 4.225ms 4.229ms 4.237ms 4.251ms 0.65% 3.904 25.817 0.08% 0.000ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
profile_add_sample_frames_x1000 execution_time [4.224ms; 4.225ms] or [-0.011%; +0.011%] None None None

Group 8

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
ip_address/quantize_peer_ip_address_benchmark execution_time 5.367µs 5.449µs ± 0.052µs 5.439µs ± 0.012µs 5.449µs 5.571µs 5.576µs 5.577µs 2.55% 1.349 1.167 0.94% 0.004µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
ip_address/quantize_peer_ip_address_benchmark execution_time [5.442µs; 5.456µs] or [-0.131%; +0.131%] None None None

Group 9

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
receiver_entry_point/report/2598 execution_time 3.439ms 3.466ms ± 0.016ms 3.463ms ± 0.007ms 3.472ms 3.497ms 3.525ms 3.542ms 2.30% 1.688 4.177 0.46% 0.001ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
receiver_entry_point/report/2598 execution_time [3.464ms; 3.469ms] or [-0.065%; +0.065%] None None None

Group 10

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... execution_time 186.154µs 186.523µs ± 0.200µs 186.503µs ± 0.151µs 186.656µs 186.886µs 187.022µs 187.033µs 0.28% 0.409 -0.482 0.11% 0.014µs 1 200
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... throughput 5346660.776op/s 5361274.795op/s ± 5744.918op/s 5361831.468op/s ± 4354.789op/s 5366025.127op/s 5369876.524op/s 5371263.355op/s 5371899.273op/s 0.19% -0.405 -0.486 0.11% 406.227op/s 1 200
normalization/normalize_name/normalize_name/bad-name execution_time 17.917µs 18.053µs ± 0.038µs 18.055µs ± 0.022µs 18.076µs 18.113µs 18.161µs 18.172µs 0.65% 0.012 1.204 0.21% 0.003µs 1 200
normalization/normalize_name/normalize_name/bad-name throughput 55030555.832op/s 55391824.996op/s ± 115474.635op/s 55385572.013op/s ± 66941.342op/s 55459026.579op/s 55573678.013op/s 55674185.316op/s 55812496.277op/s 0.77% 0.008 1.207 0.21% 8165.290op/s 1 200
normalization/normalize_name/normalize_name/good execution_time 10.144µs 10.320µs ± 0.087µs 10.327µs ± 0.030µs 10.351µs 10.387µs 10.430µs 11.286µs 9.28% 6.568 73.073 0.84% 0.006µs 1 200
normalization/normalize_name/normalize_name/good throughput 88604638.050op/s 96909091.781op/s ± 780055.475op/s 96830488.862op/s ± 277927.330op/s 97196524.583op/s 98019673.014op/s 98228997.362op/s 98575877.288op/s 1.80% -5.788 62.577 0.80% 55158.252op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... execution_time [186.495µs; 186.551µs] or [-0.015%; +0.015%] None None None
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... throughput [5360478.604op/s; 5362070.985op/s] or [-0.015%; +0.015%] None None None
normalization/normalize_name/normalize_name/bad-name execution_time [18.048µs; 18.058µs] or [-0.029%; +0.029%] None None None
normalization/normalize_name/normalize_name/bad-name throughput [55375821.322op/s; 55407828.670op/s] or [-0.029%; +0.029%] None None None
normalization/normalize_name/normalize_name/good execution_time [10.308µs; 10.332µs] or [-0.117%; +0.117%] None None None
normalization/normalize_name/normalize_name/good throughput [96800983.594op/s; 97017199.967op/s] or [-0.112%; +0.112%] None None None

Group 11

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching serializing traces from their internal representation to msgpack execution_time 14.210ms 14.253ms ± 0.026ms 14.249ms ± 0.011ms 14.262ms 14.284ms 14.344ms 14.442ms 1.36% 2.961 16.108 0.18% 0.002ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching serializing traces from their internal representation to msgpack execution_time [14.249ms; 14.256ms] or [-0.025%; +0.025%] None None None

Group 12

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
credit_card/is_card_number/ execution_time 3.896µs 3.913µs ± 0.003µs 3.913µs ± 0.002µs 3.915µs 3.918µs 3.919µs 3.921µs 0.20% -0.691 4.392 0.07% 0.000µs 1 200
credit_card/is_card_number/ throughput 255043037.016op/s 255564137.974op/s ± 186914.389op/s 255562905.045op/s ± 124555.366op/s 255680699.222op/s 255821777.029op/s 255880877.377op/s 256646579.085op/s 0.42% 0.704 4.466 0.07% 13216.843op/s 1 200
credit_card/is_card_number/ 3782-8224-6310-005 execution_time 80.189µs 80.771µs ± 0.264µs 80.761µs ± 0.179µs 80.927µs 81.225µs 81.417µs 81.866µs 1.37% 0.558 0.714 0.33% 0.019µs 1 200
credit_card/is_card_number/ 3782-8224-6310-005 throughput 12215054.306op/s 12380796.283op/s ± 40390.458op/s 12382246.954op/s ± 27368.202op/s 12411590.257op/s 12437807.523op/s 12455285.451op/s 12470513.270op/s 0.71% -0.535 0.650 0.33% 2856.037op/s 1 200
credit_card/is_card_number/ 378282246310005 execution_time 73.018µs 73.597µs ± 0.327µs 73.547µs ± 0.212µs 73.765µs 74.197µs 74.427µs 74.574µs 1.40% 0.620 -0.113 0.44% 0.023µs 1 200
credit_card/is_card_number/ 378282246310005 throughput 13409587.875op/s 13587743.881op/s ± 60256.775op/s 13596682.410op/s ± 39266.368op/s 13633933.589op/s 13671407.543op/s 13683962.962op/s 13695235.297op/s 0.72% -0.600 -0.147 0.44% 4260.797op/s 1 200
credit_card/is_card_number/37828224631 execution_time 3.889µs 3.911µs ± 0.011µs 3.913µs ± 0.002µs 3.914µs 3.917µs 3.934µs 3.994µs 2.08% 3.052 24.216 0.28% 0.001µs 1 200
credit_card/is_card_number/37828224631 throughput 250385006.131op/s 255720721.351op/s ± 714640.216op/s 255588817.989op/s ± 114715.288op/s 255719836.719op/s 256889465.674op/s 257022258.557op/s 257104601.840op/s 0.59% -2.912 23.106 0.28% 50532.694op/s 1 200
credit_card/is_card_number/378282246310005 execution_time 70.054µs 70.589µs ± 0.238µs 70.570µs ± 0.144µs 70.717µs 71.046µs 71.224µs 71.414µs 1.20% 0.651 0.513 0.34% 0.017µs 1 200
credit_card/is_card_number/378282246310005 throughput 14002910.436op/s 14166637.504op/s ± 47736.859op/s 14170381.784op/s ± 28952.269op/s 14198292.675op/s 14234090.505op/s 14258768.057op/s 14274616.274op/s 0.74% -0.629 0.478 0.34% 3375.506op/s 1 200
credit_card/is_card_number/37828224631000521389798 execution_time 53.051µs 53.147µs ± 0.040µs 53.146µs ± 0.029µs 53.172µs 53.217µs 53.236µs 53.262µs 0.22% 0.155 -0.249 0.08% 0.003µs 1 200
credit_card/is_card_number/37828224631000521389798 throughput 18775225.484op/s 18815666.099op/s ± 14291.423op/s 18816246.066op/s ± 10103.537op/s 18826937.832op/s 18837867.477op/s 18845200.927op/s 18849626.259op/s 0.18% -0.151 -0.251 0.08% 1010.556op/s 1 200
credit_card/is_card_number/x371413321323331 execution_time 6.429µs 6.439µs ± 0.006µs 6.439µs ± 0.003µs 6.442µs 6.450µs 6.455µs 6.460µs 0.32% 0.662 0.772 0.09% 0.000µs 1 200
credit_card/is_card_number/x371413321323331 throughput 154810743.866op/s 155293694.445op/s ± 136195.971op/s 155299094.413op/s ± 79265.829op/s 155377380.974op/s 155502350.093op/s 155534870.666op/s 155547474.561op/s 0.16% -0.656 0.759 0.09% 9630.509op/s 1 200
credit_card/is_card_number_no_luhn/ execution_time 3.896µs 3.912µs ± 0.003µs 3.912µs ± 0.001µs 3.914µs 3.916µs 3.919µs 3.928µs 0.41% 0.175 11.406 0.07% 0.000µs 1 200
credit_card/is_card_number_no_luhn/ throughput 254551894.590op/s 255592385.915op/s ± 177307.610op/s 255604710.636op/s ± 94739.610op/s 255696298.797op/s 255806203.061op/s 255860145.937op/s 256678046.027op/s 0.42% -0.148 11.429 0.07% 12537.541op/s 1 200
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 execution_time 64.881µs 65.152µs ± 0.120µs 65.132µs ± 0.072µs 65.222µs 65.373µs 65.467µs 65.637µs 0.78% 0.800 0.954 0.18% 0.008µs 1 200
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 throughput 15235270.165op/s 15348739.180op/s ± 28253.100op/s 15353347.319op/s ± 17085.457op/s 15367219.416op/s 15384986.562op/s 15402398.138op/s 15412767.937op/s 0.39% -0.787 0.920 0.18% 1997.796op/s 1 200
credit_card/is_card_number_no_luhn/ 378282246310005 execution_time 58.188µs 58.416µs ± 0.125µs 58.390µs ± 0.066µs 58.467µs 58.691µs 58.790µs 58.871µs 0.82% 1.213 1.586 0.21% 0.009µs 1 200
credit_card/is_card_number_no_luhn/ 378282246310005 throughput 16986334.001op/s 17118599.828op/s ± 36471.914op/s 17126210.014op/s ± 19391.855op/s 17143856.869op/s 17162264.987op/s 17177483.892op/s 17185672.305op/s 0.35% -1.200 1.545 0.21% 2578.954op/s 1 200
credit_card/is_card_number_no_luhn/37828224631 execution_time 3.891µs 3.914µs ± 0.003µs 3.915µs ± 0.002µs 3.917µs 3.919µs 3.922µs 3.928µs 0.35% -1.052 8.902 0.09% 0.000µs 1 200
credit_card/is_card_number_no_luhn/37828224631 throughput 254573001.597op/s 255471295.121op/s ± 228351.334op/s 255454327.192op/s ± 154027.679op/s 255626489.311op/s 255765398.368op/s 255874885.402op/s 256978819.858op/s 0.60% 1.079 9.057 0.09% 16146.878op/s 1 200
credit_card/is_card_number_no_luhn/378282246310005 execution_time 55.186µs 55.576µs ± 0.177µs 55.542µs ± 0.097µs 55.658µs 55.900µs 56.003µs 56.441µs 1.62% 0.968 2.107 0.32% 0.013µs 1 200
credit_card/is_card_number_no_luhn/378282246310005 throughput 17717628.767op/s 17993623.393op/s ± 57266.752op/s 18004270.433op/s ± 31599.426op/s 18028336.462op/s 18065493.828op/s 18101572.629op/s 18120577.728op/s 0.65% -0.938 1.982 0.32% 4049.371op/s 1 200
credit_card/is_card_number_no_luhn/37828224631000521389798 execution_time 53.060µs 53.143µs ± 0.038µs 53.142µs ± 0.027µs 53.167µs 53.203µs 53.220µs 53.347µs 0.39% 0.753 3.004 0.07% 0.003µs 1 200
credit_card/is_card_number_no_luhn/37828224631000521389798 throughput 18745057.107op/s 18817298.052op/s ± 13486.199op/s 18817443.173op/s ± 9511.920op/s 18827384.191op/s 18838443.408op/s 18842373.449op/s 18846519.232op/s 0.15% -0.744 2.954 0.07% 953.618op/s 1 200
credit_card/is_card_number_no_luhn/x371413321323331 execution_time 6.429µs 6.439µs ± 0.005µs 6.439µs ± 0.003µs 6.442µs 6.447µs 6.450µs 6.471µs 0.50% 1.105 4.846 0.08% 0.000µs 1 200
credit_card/is_card_number_no_luhn/x371413321323331 throughput 154540833.609op/s 155303056.032op/s ± 129350.864op/s 155313411.314op/s ± 73201.308op/s 155378687.653op/s 155498787.318op/s 155525002.236op/s 155551931.909op/s 0.15% -1.091 4.755 0.08% 9146.487op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
credit_card/is_card_number/ execution_time [3.913µs; 3.913µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/ throughput [255538233.437op/s; 255590042.510op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/ 3782-8224-6310-005 execution_time [80.735µs; 80.808µs] or [-0.045%; +0.045%] None None None
credit_card/is_card_number/ 3782-8224-6310-005 throughput [12375198.554op/s; 12386394.012op/s] or [-0.045%; +0.045%] None None None
credit_card/is_card_number/ 378282246310005 execution_time [73.552µs; 73.643µs] or [-0.062%; +0.062%] None None None
credit_card/is_card_number/ 378282246310005 throughput [13579392.872op/s; 13596094.891op/s] or [-0.061%; +0.061%] None None None
credit_card/is_card_number/37828224631 execution_time [3.909µs; 3.912µs] or [-0.039%; +0.039%] None None None
credit_card/is_card_number/37828224631 throughput [255621679.090op/s; 255819763.612op/s] or [-0.039%; +0.039%] None None None
credit_card/is_card_number/378282246310005 execution_time [70.556µs; 70.622µs] or [-0.047%; +0.047%] None None None
credit_card/is_card_number/378282246310005 throughput [14160021.634op/s; 14173253.373op/s] or [-0.047%; +0.047%] None None None
credit_card/is_card_number/37828224631000521389798 execution_time [53.142µs; 53.153µs] or [-0.011%; +0.011%] None None None
credit_card/is_card_number/37828224631000521389798 throughput [18813685.445op/s; 18817646.753op/s] or [-0.011%; +0.011%] None None None
credit_card/is_card_number/x371413321323331 execution_time [6.439µs; 6.440µs] or [-0.012%; +0.012%] None None None
credit_card/is_card_number/x371413321323331 throughput [155274818.993op/s; 155312569.897op/s] or [-0.012%; +0.012%] None None None
credit_card/is_card_number_no_luhn/ execution_time [3.912µs; 3.913µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ throughput [255567812.785op/s; 255616959.044op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 execution_time [65.136µs; 65.169µs] or [-0.026%; +0.026%] None None None
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 throughput [15344823.572op/s; 15352654.788op/s] or [-0.026%; +0.026%] None None None
credit_card/is_card_number_no_luhn/ 378282246310005 execution_time [58.399µs; 58.434µs] or [-0.030%; +0.030%] None None None
credit_card/is_card_number_no_luhn/ 378282246310005 throughput [17113545.172op/s; 17123654.484op/s] or [-0.030%; +0.030%] None None None
credit_card/is_card_number_no_luhn/37828224631 execution_time [3.914µs; 3.915µs] or [-0.012%; +0.012%] None None None
credit_card/is_card_number_no_luhn/37828224631 throughput [255439647.822op/s; 255502942.420op/s] or [-0.012%; +0.012%] None None None
credit_card/is_card_number_no_luhn/378282246310005 execution_time [55.551µs; 55.600µs] or [-0.044%; +0.044%] None None None
credit_card/is_card_number_no_luhn/378282246310005 throughput [17985686.772op/s; 18001560.014op/s] or [-0.044%; +0.044%] None None None
credit_card/is_card_number_no_luhn/37828224631000521389798 execution_time [53.137µs; 53.148µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/37828224631000521389798 throughput [18815428.995op/s; 18819167.110op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/x371413321323331 execution_time [6.438µs; 6.440µs] or [-0.012%; +0.012%] None None None
credit_card/is_card_number_no_luhn/x371413321323331 throughput [155285129.246op/s; 155320982.817op/s] or [-0.012%; +0.012%] None None None

Group 13

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
sdk_test_data/rules-based execution_time 146.739µs 148.485µs ± 1.536µs 148.226µs ± 0.475µs 148.754µs 149.856µs 153.887µs 163.054µs 10.00% 5.710 45.427 1.03% 0.109µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
sdk_test_data/rules-based execution_time [148.272µs; 148.698µs] or [-0.143%; +0.143%] None None None

Group 14

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
single_flag_killswitch/rules-based execution_time 190.406ns 192.765ns ± 1.902ns 192.651ns ± 1.168ns 193.724ns 196.349ns 197.938ns 202.702ns 5.22% 1.279 2.992 0.98% 0.134ns 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
single_flag_killswitch/rules-based execution_time [192.502ns; 193.029ns] or [-0.137%; +0.137%] None None None

Group 15

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
profile_add_sample_timestamped_x1000 execution_time 4.207ms 4.213ms ± 0.008ms 4.212ms ± 0.002ms 4.214ms 4.216ms 4.220ms 4.323ms 2.66% 11.825 153.724 0.20% 0.001ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
profile_add_sample_timestamped_x1000 execution_time [4.211ms; 4.214ms] or [-0.027%; +0.027%] None None None

Group 16

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
redis/obfuscate_redis_string execution_time 33.305µs 33.970µs ± 1.178µs 33.437µs ± 0.052µs 33.526µs 36.512µs 36.577µs 37.894µs 13.33% 1.736 1.137 3.46% 0.083µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
redis/obfuscate_redis_string execution_time [33.807µs; 34.133µs] or [-0.481%; +0.481%] None None None

Group 17

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
two way interface execution_time 17.655µs 25.283µs ± 9.412µs 17.883µs ± 0.148µs 33.680µs 41.757µs 43.249µs 66.554µs 272.16% 0.960 0.623 37.13% 0.665µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
two way interface execution_time [23.979µs; 26.588µs] or [-5.159%; +5.159%] None None None

Group 18

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
concentrator/add_spans_to_concentrator execution_time 12.961ms 13.000ms ± 0.014ms 13.000ms ± 0.009ms 13.009ms 13.022ms 13.036ms 13.048ms 0.37% 0.323 0.435 0.10% 0.001ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
concentrator/add_spans_to_concentrator execution_time [12.998ms; 13.002ms] or [-0.014%; +0.014%] None None None

Group 19

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
profile_add_sample2_frames_x1000 execution_time 747.236µs 748.378µs ± 0.531µs 748.310µs ± 0.336µs 748.650µs 749.373µs 749.965µs 750.642µs 0.31% 1.163 2.299 0.07% 0.038µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
profile_add_sample2_frames_x1000 execution_time [748.304µs; 748.451µs] or [-0.010%; +0.010%] None None None

Group 20

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 58bfd52 1773754523 yannham/mem-ordering-op-counters
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... execution_time 495.367µs 496.255µs ± 0.590µs 496.222µs ± 0.296µs 496.512µs 496.825µs 497.112µs 501.105µs 0.98% 4.326 31.708 0.12% 0.042µs 1 200
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... throughput 1995588.640op/s 2015097.009op/s ± 2384.073op/s 2015226.830op/s ± 1200.398op/s 2016434.219op/s 2017812.102op/s 2018468.374op/s 2018706.456op/s 0.17% -4.273 31.177 0.12% 168.579op/s 1 200
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて execution_time 369.655µs 370.278µs ± 0.345µs 370.235µs ± 0.179µs 370.457µs 370.735µs 371.233µs 372.938µs 0.73% 2.750 17.495 0.09% 0.024µs 1 200
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて throughput 2681414.290op/s 2700673.864op/s ± 2509.699op/s 2700984.039op/s ± 1307.448op/s 2702079.573op/s 2703649.584op/s 2704918.858op/s 2705222.597op/s 0.16% -2.717 17.180 0.09% 177.462op/s 1 200
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters execution_time 169.753µs 170.181µs ± 0.145µs 170.190µs ± 0.095µs 170.275µs 170.405µs 170.476µs 170.698µs 0.30% -0.081 0.302 0.08% 0.010µs 1 200
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters throughput 5858301.847op/s 5876085.883op/s ± 4989.838op/s 5875769.427op/s ± 3265.983op/s 5879173.483op/s 5884238.151op/s 5886567.973op/s 5890913.704op/s 0.26% 0.087 0.298 0.08% 352.835op/s 1 200
normalization/normalize_service/normalize_service/[empty string] execution_time 36.849µs 37.055µs ± 0.113µs 37.080µs ± 0.078µs 37.142µs 37.211µs 37.285µs 37.323µs 0.65% -0.255 -0.904 0.30% 0.008µs 1 200
normalization/normalize_service/normalize_service/[empty string] throughput 26793284.993op/s 26987101.025op/s ± 82127.657op/s 26968542.090op/s ± 56897.264op/s 27055988.866op/s 27120116.971op/s 27134977.910op/s 27137888.750op/s 0.63% 0.264 -0.908 0.30% 5807.302op/s 1 200
normalization/normalize_service/normalize_service/test_ASCII execution_time 46.209µs 46.336µs ± 0.197µs 46.312µs ± 0.031µs 46.342µs 46.437µs 46.489µs 48.424µs 4.56% 9.005 86.296 0.42% 0.014µs 1 200
normalization/normalize_service/normalize_service/test_ASCII throughput 20650708.308op/s 21581824.412op/s ± 88278.322op/s 21592645.357op/s ± 14469.225op/s 21607607.148op/s 21625384.339op/s 21631681.608op/s 21641004.434op/s 0.22% -8.913 84.933 0.41% 6242.220op/s 1 200
| scenario | metric | 95% CI mean | Shapiro-Wilk pvalue | Ljung-Box pvalue (lag=1) | Dip test pvalue |
|---|---|---|---|---|---|
| normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... | execution_time | [496.173µs; 496.337µs] or [-0.016%; +0.016%] | None | None | None |
| normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... | throughput | [2014766.600op/s; 2015427.419op/s] or [-0.016%; +0.016%] | None | None | None |
| normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて | execution_time | [370.230µs; 370.326µs] or [-0.013%; +0.013%] | None | None | None |
| normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて | throughput | [2700326.044op/s; 2701021.684op/s] or [-0.013%; +0.013%] | None | None | None |
| normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters | execution_time | [170.161µs; 170.201µs] or [-0.012%; +0.012%] | None | None | None |
| normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters | throughput | [5875394.339op/s; 5876777.426op/s] or [-0.012%; +0.012%] | None | None | None |
| normalization/normalize_service/normalize_service/[empty string] | execution_time | [37.039µs; 37.071µs] or [-0.042%; +0.042%] | None | None | None |
| normalization/normalize_service/normalize_service/[empty string] | throughput | [26975718.922op/s; 26998483.129op/s] or [-0.042%; +0.042%] | None | None | None |
| normalization/normalize_service/normalize_service/test_ASCII | execution_time | [46.309µs; 46.363µs] or [-0.059%; +0.059%] | None | None | None |
| normalization/normalize_service/normalize_service/test_ASCII | throughput | [21569589.885op/s; 21594058.938op/s] or [-0.057%; +0.057%] | None | None | None |

Baseline

Omitted due to size.

@github-actions

Clippy Allow Annotation Report

Comparing clippy allow annotations between branches:

  • Base Branch: origin/main
  • PR Branch: origin/yannham/mem-ordering-op-counters

Summary by Rule

| Rule | Base Branch | PR Branch | Change |
|---|---|---|---|

Annotation Counts by File

| File | Base Branch | PR Branch | Change |
|---|---|---|---|

Annotation Stats by Crate

| Crate | Base Branch | PR Branch | Change |
|---|---|---|---|
| clippy-annotation-reporter | 5 | 5 | No change (0%) |
| datadog-ffe-ffi | 1 | 1 | No change (0%) |
| datadog-ipc | 28 | 28 | No change (0%) |
| datadog-live-debugger | 6 | 6 | No change (0%) |
| datadog-live-debugger-ffi | 10 | 10 | No change (0%) |
| datadog-profiling-replayer | 4 | 4 | No change (0%) |
| datadog-remote-config | 3 | 3 | No change (0%) |
| datadog-sidecar | 59 | 59 | No change (0%) |
| libdd-common | 10 | 10 | No change (0%) |
| libdd-common-ffi | 12 | 12 | No change (0%) |
| libdd-data-pipeline | 5 | 5 | No change (0%) |
| libdd-ddsketch | 2 | 2 | No change (0%) |
| libdd-dogstatsd-client | 1 | 1 | No change (0%) |
| libdd-profiling | 13 | 13 | No change (0%) |
| libdd-telemetry | 19 | 19 | No change (0%) |
| libdd-tinybytes | 4 | 4 | No change (0%) |
| libdd-trace-normalization | 2 | 2 | No change (0%) |
| libdd-trace-obfuscation | 9 | 9 | No change (0%) |
| libdd-trace-utils | 15 | 15 | No change (0%) |
| **Total** | 208 | 208 | No change (0%) |

About This Report

This report tracks Clippy allow annotations for specific rules, showing how they've changed in this PR. Decreasing the number of these annotations generally improves code quality.

@yannham yannham force-pushed the yannham/mem-ordering-op-counters branch from 5630e20 to 15a5128 Compare March 17, 2026 13:10
@codecov-commenter

codecov-commenter commented Mar 17, 2026

Codecov Report

❌ Patch coverage is 0% with 8 lines in your changes missing coverage. Please review.
✅ Project coverage is 71.49%. Comparing base (6a02f01) to head (58bfd52).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1744      +/-   ##
==========================================
- Coverage   71.59%   71.49%   -0.10%     
==========================================
  Files         430      430              
  Lines       63967    63976       +9     
==========================================
- Hits        45796    45740      -56     
- Misses      18171    18236      +65     
| Component | Coverage Δ |
|---|---|
| libdd-crashtracker | 63.92% <0.00%> (+0.03%) ⬆️ |
| libdd-crashtracker-ffi | 18.19% <ø> (+0.46%) ⬆️ |
| libdd-alloc | 98.77% <ø> (ø) |
| libdd-data-pipeline | 87.94% <ø> (ø) |
| libdd-data-pipeline-ffi | 74.85% <ø> (ø) |
| libdd-common | 79.73% <ø> (ø) |
| libdd-common-ffi | 73.40% <ø> (ø) |
| libdd-telemetry | 62.48% <ø> (+0.03%) ⬆️ |
| libdd-telemetry-ffi | 16.75% <ø> (ø) |
| libdd-dogstatsd-client | 82.64% <ø> (ø) |
| datadog-ipc | 80.35% <ø> (ø) |
| libdd-profiling | 81.60% <ø> (+0.01%) ⬆️ |
| libdd-profiling-ffi | 63.65% <ø> (ø) |
| datadog-sidecar | 33.51% <ø> (-1.02%) ⬇️ |
| datdog-sidecar-ffi | 12.56% <ø> (-4.45%) ⬇️ |
| spawn-worker | 54.69% <ø> (ø) |
| libdd-tinybytes | 93.16% <ø> (ø) |
| libdd-trace-normalization | 81.71% <ø> (ø) |
| libdd-trace-obfuscation | 91.80% <ø> (ø) |
| libdd-trace-protobuf | 68.25% <ø> (ø) |
| libdd-trace-utils | 88.98% <ø> (ø) |
| datadog-tracer-flare | 90.45% <ø> (ø) |
| libdd-log | 74.69% <ø> (ø) |

@yannham yannham force-pushed the yannham/mem-ordering-op-counters branch from 15a5128 to 58bfd52 Compare March 17, 2026 13:35
@yannham yannham requested a review from gyuheon0h March 17, 2026 13:42
@@ -58,9 +57,7 @@ static OP_COUNTERS: [AtomicI64; OpTypes::SIZE as usize] = [ATOMIC_ZERO; OpTypes:
/// ATOMICITY:
/// This function is atomic.
pub fn begin_op(op: OpTypes) -> Result<(), CounterError> {
Contributor

I wonder if this is a bug (even before these changes).

For begin_op:

  1. If old == i64::MAX - 1, the new value becomes i64::MAX, which is still not an overflow.
  2. If old == i64::MAX, fetch_add(1, ...) wraps and stores i64::MIN before we can report an error.

For end_op:
If old == 0, we return an error, but the counter has already been decremented to -1.
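The window can be seen in a minimal sketch (hypothetical names, mirroring the check-after-update shape described above, not the actual libdatadog code):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

static COUNTER: AtomicI64 = AtomicI64::new(0);

// Sketch of the pattern under discussion: the overflow check runs *after*
// the store, so when old == i64::MAX the wrapped value i64::MIN is already
// visible to other threads before the error is reported.
fn begin_op_sketch() -> Result<(), &'static str> {
    let old = COUNTER.fetch_add(1, Ordering::Relaxed);
    if old == i64::MAX {
        // Too late: the counter already wrapped.
        return Err("counter overflow");
    }
    Ok(())
}

fn main() {
    COUNTER.store(i64::MAX, Ordering::Relaxed);
    assert!(begin_op_sketch().is_err());
    // The wrapped value was stored before the error was returned.
    assert_eq!(COUNTER.load(Ordering::Relaxed), i64::MIN);
    println!("ok");
}
```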

Contributor

But since we use these as diagnostics, and they're not critical for synchronization, I think it's not life-threatening?

Contributor Author

@yannham yannham Mar 17, 2026

Hmm, you're onto something. The first one is a classical problem with counters; IIRC there's specific handling in e.g. the implementation of Arc (even if you do it right, there could theoretically be concurrent fetch_adds between your fetch_add and the test, which would overflow). One possibility is to keep a "buffer zone": instead of checking for overflow tightly, you could for example test for old > i64::MAX / 2, and maybe reset to old upon overflow (it's almost impossible in practice that there are i64::MAX / 2 concurrent increments before the test). A clean fix requires an initial load and a compare-exchange, I fear, which is more costly for the 99.99% of code paths where you don't actually overflow.

I'm not sure the second is an issue here, though: since the atomic is i64 and the check is old <= 0, it's probably ok for the counter to go far into the negative values - it's considered the same as being 0? Except maybe upon reading/reporting? Though, similarly, you could fix it with a more complex read-then-compare-exchange loop.

By the way, what do you think is a reasonable range for those counters in practice?
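The buffer-zone idea can be sketched like this (hypothetical helper, not the actual libdatadog code; it assumes far fewer than i64::MAX / 2 threads race between the fetch_add and the check):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

static COUNTER: AtomicI64 = AtomicI64::new(0);

// "Buffer zone" variant: treat anything above i64::MAX / 2 as saturated
// and roll the increment back, so repeated calls cannot creep toward a
// real wrap at i64::MAX.
fn begin_op_buffered() -> Result<(), &'static str> {
    let old = COUNTER.fetch_add(1, Ordering::Relaxed);
    if old > i64::MAX / 2 {
        // Undo the increment; the counter stays in the safe zone.
        COUNTER.fetch_sub(1, Ordering::Relaxed);
        return Err("counter saturated");
    }
    Ok(())
}

fn main() {
    assert!(begin_op_buffered().is_ok());
    COUNTER.store(i64::MAX / 2 + 1, Ordering::Relaxed);
    assert!(begin_op_buffered().is_err());
    // The rollback left the counter where it was.
    assert_eq!(COUNTER.load(Ordering::Relaxed), i64::MAX / 2 + 1);
    println!("ok");
}
```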

Contributor Author

Yeah, Arc is basically pulling the MAX / 2 trick as well: https://doc.rust-lang.org/src/alloc/sync.rs.html#2390-2407

Contributor

Oh, it's a non-issue for crashtracking. Each op will only ever be 0 or 1, technically. I was just thinking about the underlying logic.

Contributor

Well, I guess the upper bound is the number of threads doing the same op at the same time. Mostly profiling operations.

Contributor Author

@yannham yannham Mar 17, 2026

I see. Then indeed overflow is very theoretical, in fact straight up impossible, given that each thread takes up some space for its stack, making the max number of live threads at any point in time much smaller than i64::MAX. But yeah, in general I would say that keeping a safe range (i64::MAX negative values for underflow, and i64::MAX / 2 upper values for overflow) is a practical way to do it without hurting the happy path. The right way ™️ would be to load first, and only update (with a compare-exchange) if it indeed doesn't overflow/underflow the counter, but this is considerably more expensive.
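The load-then-compare-exchange variant might look like this (a hypothetical sketch, not the actual libdatadog code; the CAS loop is the extra cost mentioned above):

```rust
use std::sync::atomic::{AtomicI64, Ordering};

static COUNTER: AtomicI64 = AtomicI64::new(0);

// Check-before-update variant: the counter is only incremented once we
// know the increment cannot overflow, at the cost of a CAS loop.
fn begin_op_cas() -> Result<(), &'static str> {
    let mut cur = COUNTER.load(Ordering::Relaxed);
    loop {
        if cur == i64::MAX {
            // Nothing was stored: the counter is left untouched on error.
            return Err("counter overflow");
        }
        match COUNTER.compare_exchange_weak(cur, cur + 1, Ordering::Relaxed, Ordering::Relaxed) {
            Ok(_) => return Ok(()),
            // Another thread raced us (or the weak CAS failed spuriously);
            // retry with the freshly observed value.
            Err(observed) => cur = observed,
        }
    }
}

fn main() {
    assert!(begin_op_cas().is_ok());
    assert_eq!(COUNTER.load(Ordering::Relaxed), 1);
    COUNTER.store(i64::MAX, Ordering::Relaxed);
    assert!(begin_op_cas().is_err());
    // Unlike the fetch_add version, no wrap occurred.
    assert_eq!(COUNTER.load(Ordering::Relaxed), i64::MAX);
    println!("ok");
}
```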

@dd-octo-sts
Contributor

dd-octo-sts bot commented Mar 17, 2026

Artifact Size Benchmark Report

**aarch64-alpine-linux-musl**

| Artifact | Baseline | Commit | Change |
|---|---|---|---|
| /aarch64-alpine-linux-musl/lib/libdatadog_profiling.a | 100.42 MB | 100.42 MB | +0% (+16 B) 👌 |
| /aarch64-alpine-linux-musl/lib/libdatadog_profiling.so | 8.70 MB | 8.70 MB | 0% (0 B) 👌 |

**aarch64-unknown-linux-gnu**

| Artifact | Baseline | Commit | Change |
|---|---|---|---|
| /aarch64-unknown-linux-gnu/lib/libdatadog_profiling.a | 117.12 MB | 117.12 MB | -0% (-80 B) 👌 |
| /aarch64-unknown-linux-gnu/lib/libdatadog_profiling.so | 11.28 MB | 11.28 MB | 0% (0 B) 👌 |

**libdatadog-x64-windows**

| Artifact | Baseline | Commit | Change |
|---|---|---|---|
| /libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.dll | 27.19 MB | 27.19 MB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.lib | 76.61 KB | 76.61 KB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.pdb | 186.18 MB | 186.16 MB | -0% (-16.00 KB) 👌 |
| /libdatadog-x64-windows/debug/static/datadog_profiling_ffi.lib | 917.12 MB | 917.12 MB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.dll | 9.94 MB | 9.94 MB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.lib | 76.61 KB | 76.61 KB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.pdb | 24.80 MB | 24.80 MB | 0% (0 B) 👌 |
| /libdatadog-x64-windows/release/static/datadog_profiling_ffi.lib | 51.48 MB | 51.48 MB | 0% (0 B) 👌 |

**libdatadog-x86-windows**

| Artifact | Baseline | Commit | Change |
|---|---|---|---|
| /libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.dll | 22.99 MB | 22.99 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.lib | 77.80 KB | 77.80 KB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.pdb | 190.39 MB | 190.39 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/debug/static/datadog_profiling_ffi.lib | 900.80 MB | 900.80 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.dll | 7.54 MB | 7.54 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.lib | 77.80 KB | 77.80 KB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.pdb | 26.54 MB | 26.54 MB | 0% (0 B) 👌 |
| /libdatadog-x86-windows/release/static/datadog_profiling_ffi.lib | 47.10 MB | 47.10 MB | 0% (0 B) 👌 |

**x86_64-alpine-linux-musl**

| Artifact | Baseline | Commit | Change |
|---|---|---|---|
| /x86_64-alpine-linux-musl/lib/libdatadog_profiling.a | 87.62 MB | 87.62 MB | +0% (+456 B) 👌 |
| /x86_64-alpine-linux-musl/lib/libdatadog_profiling.so | 10.22 MB | 10.22 MB | 0% (0 B) 👌 |

**x86_64-unknown-linux-gnu**

| Artifact | Baseline | Commit | Change |
|---|---|---|---|
| /x86_64-unknown-linux-gnu/lib/libdatadog_profiling.a | 109.99 MB | 109.99 MB | +0% (+776 B) 👌 |
| /x86_64-unknown-linux-gnu/lib/libdatadog_profiling.so | 11.97 MB | 11.97 MB | 0% (0 B) 👌 |

@gyuheon0h gyuheon0h changed the title chore(crashtracker): use weaker mem ordering for OP_COUTERS chore(crashtracker): use weaker mem ordering for OP_COUNTERS Mar 17, 2026
@gh-worker-dd-mergequeue-cf854d gh-worker-dd-mergequeue-cf854d bot merged commit fa18a2b into main Mar 18, 2026
93 checks passed
@gh-worker-dd-mergequeue-cf854d gh-worker-dd-mergequeue-cf854d bot deleted the yannham/mem-ordering-op-counters branch March 18, 2026 10:09
bwoebi added a commit that referenced this pull request Mar 20, 2026
…-unprocessed

* 'main' of github.com:DataDog/libdatadog:
  feat(sidecar): add thread mode as fallback connection for restricted environments (#1447)
  feat(profiling-ffi): ProfilesDictionary_insert_strs (#1764)
  chore(release): merge release branch to main (#1760)
  fix(libdd-crashtracker-ffi)!: add missing fields for endpoint configuration (#1758)
  ci: prevent running macos tests on release branches (#1765)
  chore(datadog-tracer-flare): remove unnecessary features/deps (#1761)
  fix(profiling-ffi): Windows extern statics need __declspec(dllimport) (#1468)
  feat(profiling): thread id/name as well-known strs (#1757)
  ci: switch to ephemeral branches (#1731)
  chore(crashtracker): use weaker mem ordering for OP_COUNTERS (#1744)
  refactor(trace-utils)!: change header name type to accept dynamic values (#1722)