Replace mutex in config service with atomic config entry / semaphore #9

Merged
z4kn4fein merged 7 commits into main from rework-service-mutex on Jun 17, 2025

Conversation

@z4kn4fein (Member) commented Jun 13, 2025

Describe the purpose of your pull request

This PR refines the synchronization mechanism in ConfigService. Rather than a Mutex, we now use atomic operations on the in-memory config data and a Semaphore to ensure that only one HTTP call is active at a time.
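
As a rough sketch of the pattern (assuming the arc-swap crate for the atomic config entry and tokio's Semaphore for the fetch gate; the type and field names here are illustrative, not the SDK's actual ones):

```rust
use std::sync::Arc;

use arc_swap::ArcSwapOption; // lock-free atomic swaps of an Arc pointer
use tokio::sync::Semaphore;

struct ConfigEntry {
    json: String,
}

struct ConfigService {
    // Hot path: readers load the current entry with a single atomic operation.
    config: ArcSwapOption<ConfigEntry>,
    // Slow path: one permit means at most one in-flight HTTP fetch.
    fetch_gate: Semaphore,
}

impl ConfigService {
    fn new() -> Self {
        Self {
            config: ArcSwapOption::empty(),
            fetch_gate: Semaphore::new(1),
        }
    }

    // Lock-free read; never blocks on a writer.
    fn current(&self) -> Option<Arc<ConfigEntry>> {
        self.config.load_full()
    }

    async fn refresh(&self) {
        // Only one task at a time gets past this gate to perform the fetch.
        let _permit = self
            .fetch_gate
            .acquire()
            .await
            .expect("semaphore is never closed");
        let fetched = ConfigEntry { json: String::from("{}") }; // stand-in for the HTTP call
        // Publish the new entry with a single atomic pointer swap.
        self.config.store(Some(Arc::new(fetched)));
    }
}

#[tokio::main]
async fn main() {
    let service = ConfigService::new();
    service.refresh().await;
    println!("config: {}", service.current().expect("just refreshed").json);
}
```

With this layout, readers never contend with each other or with the fetcher; they pay only the cost of an atomic pointer load.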

Related issues (only if applicable)

Requirement checklist (only if applicable)

  • I have covered the applied changes with automated tests.
  • I have executed the full automated test set against my changes.
  • I have validated my changes against all supported platform versions.
  • I have read and accepted the contribution agreement.

@z4kn4fein z4kn4fein requested a review from a team as a code owner June 13, 2025 22:10
Comment threads: src/fetch/service.rs, Cargo.toml, src/model/config.rs
laliconfigcat previously approved these changes Jun 14, 2025
@chamons (Collaborator) left a comment

TIL of arc-swap

I think the only thing that scares me is that unwrap in src/fetch/service.rs, but this is a bit in the weeds of multithreaded programming, past my expertise.

Should there be some sort of benchmark included to show that this improves contention performance? I could hack one up if needed; it shouldn't be too hard. Making 100 or 200 readers run in parallel should hit it?

@z4kn4fein (Member, Author) commented Jun 16, 2025

@chamons I determined that the unwrap is safe there, because the docs of Semaphore's acquire state that it returns an error only if the semaphore has been closed, which we never do. Do you see any issues besides that?
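
For context, the call in question has this shape, assuming tokio's Semaphore (an illustrative snippet, not the exact code in src/fetch/service.rs):

```rust
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    let fetch_gate = Semaphore::new(1); // one permit: at most one in-flight fetch
    // acquire() returns Err(AcquireError) only after close() has been called on
    // the semaphore; this service never closes it, so unwrap() cannot panic here.
    let permit = fetch_gate.acquire().await.unwrap();
    // ... the single HTTP fetch would run here ...
    drop(permit); // releasing the permit lets the next waiter proceed
}
```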

I made a benchmark (it was quite a journey; I struggled a bit to figure out the async benchmark options), and these are my results:

old:              time:   [322.67 µs 326.87 µs 331.67 µs]
new:              time:   [103.19 µs 106.34 µs 109.96 µs]

The benchmark sets up a simulated environment with a pre-populated cache that bypasses the first HTTP request, so we can see the real difference between the synchronized reads.
I've pushed the benchmark to this branch, so you can check it out if you'd like.
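
For reference, an async Criterion benchmark has roughly this shape (a hypothetical sketch assuming criterion with its async_tokio feature and a stand-in arc-swap workload; not necessarily the exact benchmark on the branch):

```rust
use std::sync::Arc;

use arc_swap::ArcSwap;
use criterion::{criterion_group, criterion_main, Criterion};

fn concurrent_reads(c: &mut Criterion) {
    let rt = tokio::runtime::Runtime::new().unwrap();
    // A pre-populated entry stands in for the warmed cache that
    // bypasses the first HTTP request.
    let config = Arc::new(ArcSwap::from_pointee(String::from("cached config JSON")));

    c.bench_function("readers_100", |b| {
        // to_async drives each iteration as a future on the tokio runtime.
        b.to_async(&rt).iter(|| {
            let config = config.clone();
            async move {
                let tasks: Vec<_> = (0..100)
                    .map(|_| {
                        let cfg = config.clone();
                        // Each reader performs one lock-free load of the entry.
                        tokio::spawn(async move { cfg.load_full().len() })
                    })
                    .collect();
                for t in tasks {
                    t.await.unwrap();
                }
            }
        });
    });
}

criterion_group!(benches, concurrent_reads);
criterion_main!(benches);
```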

(My original home-baked "benchmark" script ran 100000 concurrent tasks, where the difference was much more noticeable 😄)

@chamons (Collaborator) left a comment

That seems like an overall win to me.

@chamons (Collaborator) commented Jun 16, 2025

I do wonder: if you increase the number of concurrent readers, does the time go up exponentially, or faster than linearly?

@z4kn4fein (Member, Author) commented

@chamons I've added a steadily increasing reader count to the benchmark; these are the results:

Old:

100    time:   [145.67 µs 150.86 µs 155.88 µs]
1000   time:   [1.2553 ms 1.2826 ms 1.3099 ms]
10000  time:   [13.828 ms 14.247 ms 14.705 ms]
100000 time:   [218.45 ms 230.06 ms 241.82 ms]

New:

100    time:   [43.322 µs 44.457 µs 45.863 µs]
1000   time:   [445.83 µs 455.46 µs 466.94 µs]
10000  time:   [5.8118 ms 5.9303 ms 6.0545 ms]
100000 time:   [89.342 ms 90.955 ms 92.680 ms]

The execution times still increase faster than linearly, but at a lower rate than before.

@z4kn4fein z4kn4fein merged commit 2fa1313 into main Jun 17, 2025
4 checks passed
@z4kn4fein z4kn4fein deleted the rework-service-mutex branch June 17, 2025 10:26