
rtcp: add thread safety for participants_ data structure #188

Merged
merged 1 commit on Feb 6, 2023

Conversation

wowaser
Contributor

@wowaser wowaser commented Jan 11, 2023

fix for #172

@jrsnen
Member

jrsnen commented Jan 17, 2023

Hi and thanks! This is close to what is needed.

I looked it up, and shared_mutex was only introduced in C++17, which might cause problems for some of our users who are on older versions of the C++ standard (if I remember correctly, a few have been using C++14). Would a plain std::mutex be good enough?

Also, while most of it comes down to personal preference, I like explicitly locking and unlocking the mutexes with the lock/unlock functions whenever possible, just so it is easier to see which areas the mutex locks (the goal always being to lock as little code as possible to avoid delays). Could you at least replace the blocks created for locking the mutex so the code becomes clearer?
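
A minimal sketch of the explicit lock/unlock style requested here, assuming participants_ is a map guarded by a std::mutex; the class and member names are illustrative, not uvgRTP's actual code:

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>

// Hypothetical stand-in for the real uvgRTP participant type.
struct rtcp_participant {};

class rtcp_sketch {
public:
    void add_participant(uint32_t ssrc)
    {
        auto p = std::make_unique<rtcp_participant>(); // do the allocation outside the lock

        participants_mutex_.lock();         // lock only around the shared container
        participants_[ssrc] = std::move(p);
        participants_mutex_.unlock();       // explicit unlock makes the critical section visible
    }

private:
    std::mutex participants_mutex_;
    std::map<uint32_t, std::unique_ptr<rtcp_participant>> participants_;
};
```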

@wowaser
Contributor Author

wowaser commented Jan 18, 2023

@jrsnen Sure. I will still use unique_lock (available since C++11) as the locking mechanism to avoid errors in the future.

@wowaser
Contributor Author

wowaser commented Jan 19, 2023

@jrsnen Take a look please
I used classic mutex locking/unlocking and lock guards. In a couple of places I also used unique_locks so that participants_ is unlocked if there is an error or a premature return.
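
A minimal sketch of the two patterns described above, assuming the same hypothetical participants_ map and mutex as before; std::lock_guard covers the simple case, while std::unique_lock releases the mutex automatically on any early return:

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>

struct rtcp_participant { int rtp_port = 0; };

class rtcp_sketch {
public:
    // lock_guard: the whole function body is the critical section.
    bool participant_exists(uint32_t ssrc)
    {
        std::lock_guard<std::mutex> guard(participants_mutex_);
        return participants_.find(ssrc) != participants_.end();
    }

    // unique_lock: error paths can return early without leaving participants_ locked.
    bool set_rtp_port(uint32_t ssrc, int port)
    {
        std::unique_lock<std::mutex> lock(participants_mutex_);

        auto it = participants_.find(ssrc);
        if (it == participants_.end())
            return false; // premature return: unique_lock unlocks here

        it->second->rtp_port = port;
        return true;
    }

private:
    std::mutex participants_mutex_;
    std::map<uint32_t, std::unique_ptr<rtcp_participant>> participants_;
};
```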

@jrsnen
Member

jrsnen commented Feb 6, 2023

@wowaser this is great. Big thanks for this PR! Sorry it took me so long to review it.
